Search Results

Search found 5810 results on 233 pages for 'offsite backup'.


  • HAProxy with 'backup' failure

    - by A.RG
    I have a simple configuration of two MySQL servers being load balanced by HAProxy. For an unfortunate reason I need to use them in active/passive mode, so I thought I'd configure one DB as 'backup' and go to sleep. But I was wrong. Whenever I add 'backup' to the server line, HAProxy throws a communication link error (essentially saying "no DB available"); without the 'backup' keyword it works great. It just doesn't consider that server a valid option any more... I have tried this configuration:

        listen mysql 10.0.0.109:3307
            mode tcp
            balance roundrobin
            option httpchk
            server db01 10.0.0.236:3306
            server db02 10.0.0.68:3306 backup

    and also this configuration:

        frontend mysql_proxy
            bind 10.0.0.109:3307
            default_backend mysql

        backend mysql
            mode tcp
            balance roundrobin
            option httpchk
            server db01 10.0.0.236:3306
            server db02 10.0.0.68:3306 backup

    Nothing worked! Can anyone point me in the right direction? Thanks
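
    A hedged sketch of the configuration usually suggested for active/passive MySQL behind HAProxy, not a confirmed fix for this exact error: 'backup' only takes effect together with health checks (a backup server is used once every non-backup server is marked down), and option httpchk sends an HTTP probe that MySQL cannot answer, so a MySQL-aware check is normally used instead. The haproxy_check user name below is an assumption; it has to be created in MySQL for the HAProxy host.

        listen mysql 10.0.0.109:3307
            mode tcp
            balance roundrobin
            # MySQL-aware health check instead of an HTTP probe
            option mysql-check user haproxy_check
            server db01 10.0.0.236:3306 check
            server db02 10.0.0.68:3306 check backup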

    Read the article

  • wbadmin incremental system state backup

    - by user74513
    I am doing system state backups on a Windows Server 2008 R2 Enterprise (Service Pack 1) machine and expected the backups after the first one to be incremental. However, with each backup a new directory with VHD files is created, and the VHD files are almost the same size as those from the first backup, so the backups do not seem to be incremental. I used the following command to do the backup:

        wbadmin start systemstatebackup -backupTarget:f:

    I played around with the settings under "Configure Performance Settings" in the Windows Server Backup plugin in Server Manager, but according to the description at the top of the dialog these settings are not applied to system state backups. Are there any settings available for wbadmin system state backup to make the backups incremental?
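
    I am not aware of a documented wbadmin switch that makes system state backups incremental, so treat the following only as a space-management sketch: list the versions on the target and prune old system state backups so the full-size VHDs do not accumulate (the -keepVersions value here is arbitrary).

        rem list the backups wbadmin knows about on the target volume
        wbadmin get versions -backupTarget:f:

        rem keep only the three most recent system state backups, delete the rest
        wbadmin delete systemstatebackup -keepVersions:3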

    Read the article

  • Restoring a Windows 7 backup onto Server 2008 R2

    - by Colin Desmond
    I have recently moved my main machine from Windows 7 to Server 2008 R2. Just before I did the install, I used the Windows Backup facility to create a backup on a file share. I am now using the Server Backup facility in 2008 R2 to restore this, and it is not working. When I use another Win7 machine, I point the restore program at the folder containing the backup file set and it lets me look through them. In R2, I point it at the same folder and it tells me that "The specified remote share does not contain any backup." I naively assumed that the two systems would be compatible; is this not the case? Is there any way to get that data back again?
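
    As a first check (a sketch only, and it may simply confirm the incompatibility, since Windows Server Backup on 2008 R2 may not accept client backups at all), see whether the R2 tooling recognizes the backup set on the share from the command line; the share path is a placeholder:

        rem run on the Server 2008 R2 machine
        wbadmin get versions -backupTarget:\\fileserver\backups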

    Read the article

  • Entire filesystem restore from rdiff-backup snapshot

    - by atmosx
    I'm trying to do a complete system restore from an rdiff-backup. The command line for the backup was:

        rdiff-backup --exclude-special-files --exclude /tmp --exclude /mnt --exclude /proc --exclude /sys / /mnt/backup/ebox/

    I created a new partition, mounted it at /mnt/gentoo and did:

        rdiff-backup -r /mnt/vol2 /mnt/gentoo

    However, when I try to chroot into this system (following Gentoo's manual, which means mounting /dev and /proc) I get the following error:

        chroot: failed to run command '/bin/bash': No such file or directory

    All this takes place on a Parallels (virtual machine) Debian installation. Any ideas on how to proceed in order to fully restore the system? Best regards.

    P.S. /mnt/gentoo/bin/bash works fine if I execute it. All files and permissions are in place, so rdiff-backup seems to work just fine. However, the system can neither boot (it exits with a kernel panic - cannot find init) nor be chrooted.
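
    A hedged checklist for that chroot error, assuming the restore is mounted at /mnt/gentoo as in the question: "No such file or directory" for an existing /bin/bash usually means its dynamic loader is missing, or that the virtual filesystems and the directories excluded from the backup (device nodes were skipped by --exclude-special-files) have not been recreated yet.

        # is bash's interpreter (the dynamic loader) present in the restored tree?
        file /mnt/gentoo/bin/bash
        ls /mnt/gentoo/lib/ld-linux*.so* /mnt/gentoo/lib64/ld-linux*.so* 2>/dev/null

        # recreate the directories that were excluded from the backup
        mkdir -p /mnt/gentoo/tmp /mnt/gentoo/proc /mnt/gentoo/sys /mnt/gentoo/mnt /mnt/gentoo/dev

        # provide device nodes and kernel filesystems, then retry the chroot
        mount --rbind /dev /mnt/gentoo/dev
        mount -t proc proc /mnt/gentoo/proc
        mount --rbind /sys /mnt/gentoo/sys
        chroot /mnt/gentoo /bin/bash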

    Read the article

  • Where does truecrypt store the backup volume header?

    - by happygolucky
    When using WDE (whole disk encryption), where does TrueCrypt store a backup volume header, if anywhere? As far as I know there is always a backup header for regular TrueCrypt volumes, but I am not sure whether this applies when system encryption is used, because if I damage the volume header in track 0, my password won't boot my system any more. So is there no backup header on the drive? I read somewhere on a forum that TrueCrypt might keep a backup header at some position relative to the END of the HDD, but this doesn't make sense, as it could easily be overwritten by programs running in Windows. And how would TrueCrypt know where this backup is anyway?

    Read the article

  • How do I copy only the values and not the references from a Python list?

    - by wrongusername
    Specifically, I want to create a backup of a list, then make some changes to that list, append all the changes to a third list, but then reset the first list with the backup before making further changes, etc., until I'm finished making changes and want to copy back all the content in the third list to the first one. Unfortunately, it seems that whenever I make changes to the first list in another function, the backup gets changed also. Using original = backup didn't work too well; nor did

        def setEqual(restore, backup):
            restore = []
            for number in backup:
                restore.append(number)

    solve my problem; even though I successfully restored the list from the backup, the backup nevertheless changed whenever I changed the original list. How would I go about solving this problem?
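
    A short sketch of the usual fixes; the names below are illustrative, not taken from the asker's code. Slicing (or copy.deepcopy for nested lists) creates a brand-new list object, and assigning to lst[:] restores the contents in place instead of rebinding a local name:

        import copy

        original = [1, 2, [3, 4]]

        # new list object holding the same values (shallow copy)
        backup = original[:]              # also: list(original) or copy.copy(original)

        # if the elements themselves are mutable and get modified too,
        # take a deep copy instead
        backup_deep = copy.deepcopy(original)

        original.append(5)                # backup is unaffected

        def restore_from(target, snapshot):
            # slice assignment mutates the existing list object, so every
            # reference to it (including the caller's) sees the rollback
            target[:] = snapshot

        restore_from(original, backup)
        print(original)                   # [1, 2, [3, 4]]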

    Read the article

  • SQL SERVER – Transaction Log Full – Transaction Log Larger than Data File – Notes from Fields #001

    - by Pinal Dave
    I am very excited to announce a new series on this blog – Notes from Fields. I have been blogging for almost 7 years on this blog and it has been a wonderful experience. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinion. Following the same thought process, I have started this new series of Notes from Fields. In this series we will have notes from various experts in the database world. My friends at Linchpin People have graciously decided to support me in my new initiative. Linchpin People are database coaches and wellness experts for a data driven world. In this very first episode of the Notes from Fields series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and developers face in their careers: when database logs fill up your hard drive, or your database log is larger than your data file. Read the experience of Tim in his own words.

    As a consultant, I encounter a number of common issues with clients. One of the more common things I encounter is finding a user database in the FULL recovery model that does not have regular transaction log backups, or has never had a transaction log backup. When I find this, the transaction log is usually several times larger than the data file. Finding this issue is very significant to me in that it allows me to discuss service level agreements with the client. I get to ask questions such as: are nightly full backups sufficient, or do they need point-in-time recovery? This conversation gets the customer thinking about their disaster recovery and high availability solutions. This issue is also very prominent on SQL Server forums and usually has a title like "Help, my transaction log has filled up my disk" or "Help, my transaction log is many times the size of my database". In cases where the client only needs the previous night's full backup, I am able to change the recovery model to SIMPLE and shrink the transaction log using DBCC SHRINKFILE (2,1), or by specifying the transaction log file name using DBCC SHRINKFILE (file_name, target_size). When the client needs point-in-time recovery, in most cases I will still end up switching the client to the SIMPLE recovery model to truncate the transaction log, followed by a full backup. I will then schedule a SQL Agent job to make regular transaction log backups with an interval determined by the client to meet their service level agreements. It should also be noted that typically when I find an overgrown transaction log, the virtual log file count is also out of control. My cleanup will always take that into account as well; that is a subject for a future blog post. If your SQL Server is facing any issue, we can Fix Your SQL Server.

    Additional reading:
    Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace)
    SQL SERVER – How to Stop Growing Log File Too Big
    Shrinking Truncate Log File – Log Full

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
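
    A rough T-SQL sketch of the two paths Tim describes (the database name, log file id and backup paths are placeholders, not taken from the article):

        USE MyDb;

        -- Case 1: nightly full backups are enough - drop to SIMPLE and shrink the log
        ALTER DATABASE MyDb SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (2, 1);   -- 2 = log file id, 1 = target size in MB

        -- Case 2: point-in-time recovery is required - switch back to FULL,
        -- restart the log chain with a full backup, then schedule log backups
        ALTER DATABASE MyDb SET RECOVERY FULL;
        BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_full.bak' WITH INIT;
        BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log.trn';   -- run via a SQL Agent schedule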

    Read the article

  • Are non-modified FILESTREAM files excluded from DIFFERENTIAL backups?

    - by TiborKaraszi
    The short answer seems to be "yes". I got this from a forum post today, so I thought I'd test it out. Basically, the discussion is whether we can somehow cut down backup sizes for FILESTREAM data (the assumption is that FILESTREAM data isn't modified very frequently). I've seen this a few times now, and often the suggestion arises to somehow exclude the FILESTREAM data and do file-level backups of those files. But that would more or less leave us with the problem of the "old" solution: potential inconsistency....(read more)
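
    If you want to verify this on your own server, a small test sketch (the database, table and paths below are made up for illustration):

        -- take a full backup, change only non-FILESTREAM data, then take a differential
        BACKUP DATABASE FsTest TO DISK = N'C:\bak\FsTest_full.bak' WITH INIT;

        UPDATE dbo.Docs SET Title = Title + N'*' WHERE DocId = 1;   -- FILESTREAM column untouched

        BACKUP DATABASE FsTest TO DISK = N'C:\bak\FsTest_diff.bak' WITH DIFFERENTIAL, INIT;

        -- if unmodified FILESTREAM data were included, the differential ('I')
        -- would be nearly as large as the full backup ('D')
        SELECT type, backup_size, backup_finish_date
        FROM msdb.dbo.backupset
        WHERE database_name = N'FsTest'
        ORDER BY backup_finish_date DESC;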

    Read the article

  • How to create a Windows-like restore point using Deja Dup?

    - by Curious Apprentice
    When I click on Deja Dup's Storage setting, I can see some Ubuntu folders, Ubuntu One, WebDAV and all the Windows drives. If I select a Windows drive for my backup, is it going to back up all files, including my installed software, music, movies - everything?! When I open/close a Windows drive it mounts and unmounts; if I select that option, is it going to cause any mount/unmount problems? And what is WebDAV?

    Read the article

  • How to view files backed up to Ubuntu One by Déjà Dup?

    - by irrational John
    I decided to try out the Ubuntu Backup system application on my 11.10 partition. Currently I am just using the default settings to back up folders in my Home (~) folder to the deja-dup in my Ubuntu One cloud. I have not been able to figure out how to view which files have been saved to the backup location other than by restoring them to a temp folder and then seeing what was restored. I'm hoping there is a less kludgy way to just find out what has or has not been backed up.
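
    Since Déjà Dup drives duplicity underneath, one less kludgy option (a sketch; the URL below is a placeholder, so substitute the storage location shown in Déjà Dup's settings, e.g. the corresponding Ubuntu One URL) is to ask duplicity for the file list directly:

        # list every file present in the most recent backup set
        duplicity list-current-files file:///path/to/backup-location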

    Read the article

  • SQL Server 2008 Restore from Backup fails with error 3241 'cannot process this media family'

    - by pearcewg
    I am attempting to back up a database from a SQL Server instance on one machine and restore it to another, and I am encountering the frequently discussed 'SQL Server cannot process this media family' error. Both instances are SQL Server 2008, but with different patch levels:

        Restore: 10.0.2531.0
        Backup:  10.0.1600.22 ((SQL_PreRelease).080709-1414)

    The restore instance is Express; I am not sure about the backup instance's edition. The backup instance is on a virtual private server; the restore is on my development box. When I restore to a different database on the source (backup) server, it restores fine. There is lots of stuff on Google about this issue, some on Stack Overflow, but nothing that matches this exact situation. Any thoughts? It should be straightforward to do a backup and restore from one machine to another (having done this thousands of times with SQL 6.5, 7, 2000, 2005). Any ideas how to restore a database in this situation, which gives this error when attempting to restore? PARTIAL RESOLUTION: When I restored to a different box, running SQL 2008 Express on Windows Server 2003, all worked well. It just wouldn't work on the Windows 7 box. Not sure why. If anyone else has a similar experience, please let me know (there are many similar issues in different forums out there).
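
    Before anything else, it may be worth confirming that the restore instance can read the media header at all and comparing the exact builds; a hedged sketch, with the backup path as a placeholder:

        -- on the restore (Windows 7 / Express) instance
        RESTORE HEADERONLY FROM DISK = N'C:\temp\MyDb.bak';
        RESTORE VERIFYONLY FROM DISK = N'C:\temp\MyDb.bak';

        -- run on both instances and compare builds
        SELECT @@VERSION;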

    Read the article

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): A friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restored from iCloud, but water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again and avoid this whole process?) The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she does the wipe data and restore from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite it being "old" was right and therefore the SMSes, calendar entries, etc. were discarded? As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just get the more-recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

    Read the article

  • backupexec 12.5 not following symlinks on linux agent

    - by Peter Carrero
    OK, we are at a loss here trying to back up a Linux box to a Backup Exec server... we have a Backup Exec 12.5 server and a "Backup Exec for Windows Servers Linux agent" (sigh) running on one of our Linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like:

        BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found.

    Looking at the selection list, the symlink shows as a 1k file in BUE. Tools-Options-Backup has "Backup files and directories by following symbolic links/junction points" selected. The same checkboxes are selected under Job Setup-Job Properties-Edit Template-Advanced. Additionally, all the checkboxes are checked on Tools-Options-Linux, Unix, and Macintosh and on Job Setup-Job Properties-Edit Template-Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but apparently changing those options produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tape via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that performs backup and restore jobs via media management software such as Oracle's Secure Backup (OSB). Other storage managers, like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NetBackup. If you are using tapes for backup, ensure that the tape library and tape drives are compatible.

    Test Setup
    1. Install NB 7.5 master and media servers on Linux. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.)
    2. Install MEB 3.8, also on Linux.
    3. Install the NB admin console on your Windows desktop and configure the NB master server from there.
    Note: Ensure that you have root user permission to install NetBackup.

    Configuration Steps for MEB and NB
    Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 on the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console: configure the storage units by specifying the storage unit name, disk type, media server name, etc. Create NetBackup policies that are user selectable, but make sure that the policy type is "Oracle". Define the clients where MEB will be executed; sometimes this is a different host from where MEB runs, and sometimes it is the same media server where NB and the tapes are attached.

    Once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface.

    Backup
    The following parameters should also be ready for the execution:
    --sbt-lib-path: path to the SBT library specific to the NetBackup MMS. The SBT lib for NetBackup is /usr/openv/netbackup/bin/libobk.so64.
    --sbt-environment: environment variables must be defined specific to NetBackup.
    In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/

        ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB \
          --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 \
          --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" \
          --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

    Once the backup completes successfully, it should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.

        ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 \
          --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir

    Now apply logs as usual, shut down the server, perform the restore, restart the server and check the data contents.

        ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log

        ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ \
          --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M \
          --user=root --port=13000 --protocol=tcp copy-back

    The NB console should show the 'Restore' job as done. If you don't see that, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps of MEB and NB integration in the whitepaper here.

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups, their size, and the window in which you have to back up the data.

    What we are working with:
    VMware vSphere 4.1 cluster
    PS4000XV Equallogic storage array (1.6TB volume dedicated for Backup to Disk)
    Physical backup server with a single LTO4 drive
    BackupExec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
    Dual Gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish:
    I would like to implement an efficient Backup to Disk to Tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape.

    Where we are at currently:
    Of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job is finished backing up to disk, it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration, I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario: Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var. The backup script takes a snapshot of /var. The backup script creates backups from the snapshots it has, er, snapshotted. So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
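
    As far as I know, plain LVM has no cross-volume atomic snapshot, but the window can be narrowed by freezing the filesystems briefly and creating all snapshots back to back. A sketch only: the volume group and LV names are invented, and the freeze order and duration need care on a live system.

        # block writes so both snapshots see (almost) the same instant
        fsfreeze --freeze /usr
        fsfreeze --freeze /var

        lvcreate --snapshot --size 1G --name usr_snap /dev/vg0/usr
        lvcreate --snapshot --size 1G --name var_snap /dev/vg0/var

        fsfreeze --unfreeze /usr
        fsfreeze --unfreeze /var

        # back up from /dev/vg0/usr_snap and /dev/vg0/var_snap, then lvremove them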

    Read the article

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets attacked with DDoS. At the moment I have an offsite backup which is filled with 1.7TB of data. I'm currently paying as much for the backup as I am for the server, and I was looking for advice from experienced people as to what option is best from here. Would it be a viable solution to ditch the offsite backup and instead purchase an additional server which is an exact duplicate of the first, so that if the first server is down, users are re-routed to the second server without noticing the first server is even down? This would create an automatic backup of the first server (albeit not offsite) and remove the need for the expensive offsite backup. Is the above solution a true alternative to pricey backup, or is offsite backup absolutely necessary? How would I go about doing this (obviously it's pretty complex, so links to some reading material or the terminology for the procedure would be great)? Appreciate the help and advice.

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites:

    http://blog.stackoverflow.com
    http://www.codinghorror.com

    (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!)

    I am beginning the slow, painful process of recovering the website from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using this:

    My IP address was quickly banned from Google for using it
    I get lots of 500 and 503 errors and "waiting 5 minutes…"
    Ultimately, I can recover the text content faster by hand

    I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)

    Read the article

  • SQL SERVER – Database in RESTORING State for Long Time

    - by Pinal Dave
    A very interesting question I received the other day: "Our database has been in the restoring state for a long time. We have already restored all the necessary files. After restoring the files we expected the database to be operational; however, it is continuously in restoring mode. Any suggestion?" The question is very common. I sent the user follow-up emails to understand what was actually going on. I realized that after restoring their bak files and log files, their database was in the restoring state because they had not restored the latest log file with the RECOVERY option. As they had completed the whole database restore sequence (bak and log, in order), what they really needed was to recover the database from the norecovery state. The user can keep restoring log files as long as the database is in norecovery mode. Once the database is recovered, it is operational and can continue database operations. After recovery we cannot restore further logs, as the log chain past that point is meaningless. This is the reason why the database has to be in the norecovery state while it is being restored. There are three different ways to recover the database.

    1) Recover the database manually with the following command:

        RESTORE DATABASE database_name WITH RECOVERY

    2) Recover the database with the last log file:

        RESTORE LOG database_name FROM backup_device WITH RECOVERY

    3) Recover the database when the bak is restored:

        RESTORE DATABASE database_name FROM backup_device WITH RECOVERY

    To understand how the backup restore timeline works, read Backup Timeline and Understanding of Database Restore Process in Full Recovery Model.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Duplicity can't connect to CloudFiles "Network is unreachable"

    - by jwandborg
    Whenever I click "Backup now" in the Backup GUI, the smaller "Back Up" window opens, and after a while I get the following error message:

        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 1359, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 1342, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 1202, in main
            action = commandline.ProcessCommandLine(sys.argv[1:])
          File "/usr/lib/python2.7/dist-packages/duplicity/commandline.py", line 942, in ProcessCommandLine
            globals.backend = backend.get_backend(args[0])
          File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 156, in get_backend
            return _backends[pu.scheme](pu)
          File "/usr/lib/python2.7/dist-packages/duplicity/backends/cloudfilesbackend.py", line 70, in __init__
            self.container = conn.create_container(container)
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 250, in create_container
            response = self.make_request('PUT', [container_name])
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 189, in make_request
            response = retry_request()
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 182, in retry_request
            self.connection.request(method, path, data, headers)
          File "/usr/lib/python2.7/httplib.py", line 955, in request
            self._send_request(method, url, body, headers)
          File "/usr/lib/python2.7/httplib.py", line 989, in _send_request
            self.endheaders(body)
          File "/usr/lib/python2.7/httplib.py", line 951, in endheaders
            self._send_output(message_body)
          File "/usr/lib/python2.7/httplib.py", line 811, in _send_output
            self.send(msg)
          File "/usr/lib/python2.7/httplib.py", line 773, in send
            self.connect()
          File "/usr/lib/python2.7/httplib.py", line 1154, in connect
            self.timeout, self.source_address)
          File "/usr/lib/python2.7/socket.py", line 571, in create_connection
            raise err
        error: [Errno 101] Network is unreachable

    I use Rackspace Cloud Files as the storage backend; the last backup, three days ago, was successful. I have not changed any settings since then.

    Read the article

  • Questions about norton ghost

    - by Nrew
    I have used Norton Ghost to back up my hard drive, and it produced a .v2i file.
    Will I be able to use this backup, made on my PC, on my laptop?
    Can I use this backup to restore my dual-boot PC back into shape if the MBR is destroyed/damaged?
    Can I use this backup to restore my OS and applications on the same machine if the machine's hard drive is reformatted?

    Read the article

  • rsnapshot preexec

    - by Zulakis
    I am mounting my remote backup volume using an rsnapshot cmd_preexec script. If the /mnt/backup directory doesn't exist when rsnapshot starts, I get this error: ERROR: /mnt/backup does not exist. If the directory exists but the preexec mount fails, rsnapshot does not stop, resulting in the backup being written to the completely wrong server... What should I do about this? Edit: I know that I could use a wrapper script, but I don't want to do this.
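
    One pattern that may help, sketched under the assumption that your rsnapshot version supports the no_create_root parameter: point snapshot_root at a directory inside the mount and tell rsnapshot not to create it, so the run aborts whenever the volume is not actually mounted, regardless of what cmd_preexec returned.

        # /etc/rsnapshot.conf (fields must be separated by tabs)
        snapshot_root   /mnt/backup/snapshots/
        no_create_root  1
        cmd_preexec     /usr/local/bin/mount-backup.sh

        # /usr/local/bin/mount-backup.sh (illustrative; assumes an fstab entry for /mnt/backup)
        #!/bin/sh
        mountpoint -q /mnt/backup || mount /mnt/backup
        # if the mount fails, /mnt/backup/snapshots/ is absent and rsnapshot
        # aborts because of no_create_root 1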

    Read the article
