Search Results

Search found 6101 results on 245 pages for 'incremental backup'.

Page 40/245 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • backupexec 12.5 not following symlinks on linux agent

    - by Peter Carrero
    Ok, we are at a loss here trying to back up a Linux box to a Backup Exec server... we have a Backup Exec 12.5 server and a "Backup Exec for Windows Servers Linux agent" (sigh) running on one of our Linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like: BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found. Looking at the selection list, the symlink shows as a 1k file on BUE. Tools-Options-Backup has "Backup files and directories by following symbolic links/junction points" selected. The same checkboxes are selected on Job Setup-Job Properties-Edit Template-Advanced. Additionally, all the checkboxes are checked on Tools-Options-Linux, Unix, and Macintosh and on Job Setup-Job Properties-Edit Template-Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but apparently changing those options produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tape via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that performs backup and restore jobs through media management software such as Oracle's Secure Backup (OSB). Other storage managers, such as IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NetBackup. If we are using tapes for backup, ensure that the tape library and tape drives are compatible.
    Test Setup
    1. Install the NB 7.5 master and media servers on Linux. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.)
    2. Install MEB 3.8, also on Linux.
    3. Install the NB admin console on your Windows desktop and configure the NB master server from there.
    Note: Ensure that you have root user permission to install NetBackup.
    Configuration Steps for MEB and NB
    Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 on the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console: configure the storage units by specifying the storage unit name, disk type, media server name, etc. Create NetBackup policies that are user selectable, but please make sure that the policy type is "Oracle". Define the clients where MEB will be executed; sometimes this is a different host where MEB runs, and sometimes it is the same media server where NB and the tapes are attached. Once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. A diagram in the original article shows how MEB interacts with the MMS through the SBT interface.
    Backup
    The following parameters should also be ready for the execution:
    --sbt-lib-path: path to the SBT library specific to the NetBackup MMS. The SBT library for NetBackup is /usr/openv/netbackup/bin/libobk.so64
    --sbt-environment: environment variables that must be defined specific to NetBackup. In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/
    ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image
    Once the backup completes successfully, it should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.
    ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir
    Now apply the logs as usual, shut down the server, perform the restore, restart the server, and check the data contents.
    ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log
    ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back
    The NB console should show the 'Restore' job as done. If you don't see that, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps of MEB and NB integration in the whitepaper here.

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups, their size, and the window in which you have to back up the data. What we are working with:
    - VMware vSphere 4.1 Cluster
    - PS4000XV Equallogic Storage Array (1.6TB volume dedicated for Backup to Disk)
    - Physical backup server with a single LTO4 drive
    - BackupExec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
    - Dual Gigabit MPIO connections between all devices (storage array, backup server, VM hosts)
    What we would like to accomplish: I would like to implement an efficient Backup to Disk to Tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are at currently: of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job has finished backing up to disk it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration, I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario: Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var. The backup script takes a snapshot of /var. The backup script creates backups from the snapshots it has, er, snapshotted. So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.

    Read the article

  • Use Mac Pro as time machine, server and editing station?

    - by Dan
    Background: My fiancée needs a Mac Pro for movie editing and rendering. I need a web server and a backup solution for my MacBook Pro. Idea: We thought we could split the costs of the Mac Pro and set it up to act as both a web server and a backup device. Question: Is this a good idea? Specifically: Is it easy to set it up to incrementally back up one or several laptops over Wi-Fi? And what software would you recommend? Is it silent and stable enough to run a web server continuously? Will it manage all this, including simultaneous editing? Thanks.

    Read the article

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets hit with DDoS attacks. At the moment I have an offsite backup which is filled up with 1.7TB of data. I'm currently paying as much for the backup as I am for the server, and I was looking for advice from experienced people as to the best way to proceed from here. Would it be a viable solution to ditch the offsite backup and instead purchase an additional server which is an exact duplicate of the first, so that if the first server is down, users are re-routed to the second server without noticing the first server is even down? This would create an automatic backup of the first server (albeit not offsite) and eliminate the need for the expensive offsite backup. Is the above a true alternative to the pricey backup, or is offsite backup absolutely necessary? How would I go about doing this (obviously it's pretty complex, so links to some reading material or the terminology of the procedure would be great)? Appreciate the help and advice.

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
    This is the first part of the series on Incremental Statistics. Here is the index of the complete series:
    What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
    Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
    DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3
    Statistics are considered one of the most important aspects of SQL Server performance tuning. You might have often heard the phrase related to performance tuning: "Update statistics before you take any other steps to tune performance." Honestly, I have said the above statement many times, and many times I have personally updated statistics before starting any performance tuning exercise. You may agree or disagree with the point, but there is no denying that statistics play an extremely vital role in performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find it very interesting. After spending some time with this feature, I decided to write about this subject over here.
    New in SQL Server 2014 – Incremental Statistics
    Well, it seems like lots of people want to start using SQL Server 2014's new feature of Incremental Statistics. However, let us understand what this feature actually does and how it can help. I will try to simplify this feature first before I start working on the demo code.
    Code for all versions of SQL Server
    Here is the code which you can execute on all versions of SQL Server, and it will update the statistics of your table. The keyword you should pay attention to is WITH FULLSCAN. It will scan the entire table and build brand new statistics which your SQL Server performance tuning engine can use for better estimation of your execution plan.
    UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN
    Who should learn about this? Why?
    If you are using partitions in your database, you should consider implementing this feature; otherwise, this feature is pretty much not applicable to you. If you are using a single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table. Each partition holds the data that belongs to it, and it is very common for each partition to be populated separately in SQL Server.
    Real World Example
    For example, if your table contains data related to sales, you will have plenty of entries in your table. It is a good idea to divide the table into multiple partitions across filegroups; for example, you can divide this table into 3 semesters, 4 quarters or even 12 months. Let us assume that we have divided our table into 12 different partitions. Now for the month of January, our first partition will be populated, and for the month of February our second partition will be populated. Now assume that you have plenty of data in your first and second partitions. The month of March has just started and your third partition has started to populate. Due to some reason, if you want to update your statistics, what will you do?
    In SQL Server 2012 and earlier versions
    You will just use the code with WITH FULLSCAN and update the entire table. That means even though you only have new data in the third partition, you will still update the entire table. This will be a VERY resource-intensive process, as you will be updating the statistics of partitions 1 and 2 where the data has not changed at all.
    In SQL Server 2014
    You will just update Partition 3. There is now special syntax where you can specify which partition you want to update (a brief sketch follows at the end of this excerpt). The impact of this is that it smartly merges the new data with the old statistics and updates the overall statistics without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the capability to update a single partition. However, there is one other feature which is indeed attractive. Previously, a statistics update was triggered when 20% of the table data had changed. Now the same threshold applies to a single partition: if a partition sees a 20% data change, it will also trigger a partition-level statistics update which, when merged into your final statistics, will give you better performance.
    In summary
    If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you. Tomorrow: We will see working code of SQL Server 2014 Incremental Statistics. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SQL Statistics, Statistics
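    As a preview of the partition-level syntax mentioned above, here is a minimal hedged sketch. The table, column, and statistics names are hypothetical, the table is assumed to be partitioned, and the statistics must have been created with INCREMENTAL = ON before a single partition can be updated:

    -- Create the statistics object as incremental (requires a partitioned table, SQL Server 2014)
    CREATE STATISTICS Stat_Sales_OrderDate
    ON dbo.Sales (OrderDate)
    WITH INCREMENTAL = ON;

    -- Later, refresh only partition 3 (for example, the March partition)
    -- instead of rescanning the whole table with FULLSCAN
    UPDATE STATISTICS dbo.Sales (Stat_Sales_OrderDate)
    WITH RESAMPLE ON PARTITIONS (3);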

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites:
    http://blog.stackoverflow.com
    http://www.codinghorror.com
    (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the website from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using this:
    - My IP address was quickly banned from Google for using it
    - I get lots of 500 and 503 errors and "waiting 5 minutes…"
    - Ultimately, I can recover the text content faster by hand
    I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)

    Read the article

  • SQL SERVER – Database in RESTORING State for Long Time

    - by Pinal Dave
    A very interesting question I received the other day: "Our database has been in the restoring state for a long time. We have already restored all the necessary files there. After restoring the files we are expecting that the database will be in operational mode; however, it is continuously in the restoring mode. Any suggestion?" The question is very common. I sent the user follow-up emails to understand what was actually going on. I realized that after restoring their bak files and log files, their database was in the restoring state because they had not restored the latest log file with the RECOVERY option. As they had completed the entire database restore sequence (bak and log, in order), the real need was to recover the database from the NORECOVERY state. You can restore log files only while the database is in NORECOVERY mode. Once the database is recovered, it is operational and normal database operations can continue. After that, we cannot restore further logs, as the log chain after the database is recovered is meaningless. This is the reason why the database has to be in the NORECOVERY state while it is being restored. There are three different ways to recover the database (a short sketch of the full sequence follows below):
    1) Recover the database manually with the following command: RESTORE DATABASE database_name WITH RECOVERY
    2) Recover the database with the last log file: RESTORE LOG database_name FROM backup_device WITH RECOVERY
    3) Recover the database when the bak is restored: RESTORE DATABASE database_name FROM backup_device WITH RECOVERY
    To understand how the backup and restore timeline works, read Backup Timeline and Understanding of Database Restore Process in Full Recovery Model. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
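    To make the restore chain concrete, here is a minimal sketch of a typical sequence; the database name and backup file paths are hypothetical. Every step except the last uses NORECOVERY so that further logs can still be applied, and the final step (or a standalone RESTORE ... WITH RECOVERY) brings the database online:

    -- Full backup first, leaving the database in the RESTORING state
    RESTORE DATABASE SalesDB
    FROM DISK = N'D:\Backup\SalesDB_full.bak'
    WITH NORECOVERY;

    -- Intermediate log backups, still WITH NORECOVERY
    RESTORE LOG SalesDB
    FROM DISK = N'D:\Backup\SalesDB_log_1.trn'
    WITH NORECOVERY;

    -- The last log backup WITH RECOVERY brings the database online
    RESTORE LOG SalesDB
    FROM DISK = N'D:\Backup\SalesDB_log_2.trn'
    WITH RECOVERY;

    -- If everything is already restored and the database is still stuck
    -- in RESTORING, just recover it:
    -- RESTORE DATABASE SalesDB WITH RECOVERY;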

    Read the article

  • Duplicity can't connect to CloudFiles "Network is unreachable"

    - by jwandborg
    Whenever I click "Backup now" in the Backup GUI, the smaller "Back Up" window opens, and after a while I get the following error message:
    Traceback (most recent call last):
      File "/usr/bin/duplicity", line 1359, in with_tempdir(main)
      File "/usr/bin/duplicity", line 1342, in with_tempdir fn()
      File "/usr/bin/duplicity", line 1202, in main action = commandline.ProcessCommandLine(sys.argv[1:])
      File "/usr/lib/python2.7/dist-packages/duplicity/commandline.py", line 942, in ProcessCommandLine globals.backend = backend.get_backend(args[0])
      File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 156, in get_backend return _backends[pu.scheme](pu)
      File "/usr/lib/python2.7/dist-packages/duplicity/backends/cloudfilesbackend.py", line 70, in __init__ self.container = conn.create_container(container)
      File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 250, in create_container response = self.make_request('PUT', [container_name])
      File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 189, in make_request response = retry_request()
      File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 182, in retry_request self.connection.request(method, path, data, headers)
      File "/usr/lib/python2.7/httplib.py", line 955, in request self._send_request(method, url, body, headers)
      File "/usr/lib/python2.7/httplib.py", line 989, in _send_request self.endheaders(body)
      File "/usr/lib/python2.7/httplib.py", line 951, in endheaders self._send_output(message_body)
      File "/usr/lib/python2.7/httplib.py", line 811, in _send_output self.send(msg)
      File "/usr/lib/python2.7/httplib.py", line 773, in send self.connect()
      File "/usr/lib/python2.7/httplib.py", line 1154, in connect self.timeout, self.source_address)
      File "/usr/lib/python2.7/socket.py", line 571, in create_connection raise err
    error: [Errno 101] Network is unreachable
    I use Rackspace CloudFiles as a storage backend; the last backup was 3 days ago (successful). I have not changed any settings since then.

    Read the article

  • Questions about norton ghost

    - by Nrew
    I have used Norton Ghost to back up my hard drive, and it came up with a .v2i file. Will I be able to use this backup from my PC on my laptop? Can I use this backup to restore my dual-boot PC back into shape if the MBR is destroyed or damaged? Can I use this backup to restore my OS and applications on the same machine if the machine's hard drive is reformatted?

    Read the article

  • rsnapshot preexec

    - by Zulakis
    I am mounting my remote backup volume using an rsnapshot cmd_preexec script. If the /mnt/backup directory doesn't exist when rsnapshot starts, I get this error: ERROR: /mnt/backup does not exist. If the directory exists and the preexec mounting fails, it does not stop rsnapshot, resulting in the backup ending up on the completely wrong server... What should I do about this? Edit: I know that I could use a wrapper script, but I don't want to do this.

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7 32-bit) to help me with this process: I have my documents, music, video clips, movies and pictures on my hard disk. These will not be scattered around the system, but will all be inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk. Files deleted from C:\Senthil\ should be deleted there, new files should be copied, etc. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk. A couple of important requirements and points: I do NOT need multiple versions or historic versions. I don't need the previous versions of my files; I only want the latest copy to be present in my "backup". Incremental backup makes sense: if files were not touched since the last backup, it need not copy them. The size of my folder will run into GBs and in a year or two will go into TBs, but I will make sure the size of the external HDD is equal to or bigger than my source folder. I do not want it to run automatically, because when I accidentally delete a file in my source, it will delete the one in the backup (I know this is why we have versioning facilities). I just want to be able to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation. Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D The main reason I am asking this question is that I am a software developer and I could write this software myself, but I am a little constrained by time at the moment and I want to know if there is an existing program that can do this. Kindly don't worry about earthquakes or fire or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories. I don't think I will digitally store any life-ruining documents. This backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift + Delete, or myself getting a little careless at times! In short: is there file/folder syncing software that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)

    Read the article

  • Backing up a 22 GB MySQL database daily

    - by unknown (yahoo)
    Right now I am able to do the backup using mysqldump, but I have to take down the web server AND it takes around 5 minutes to do the backup. If I don't take down the web server, it takes forever and never finishes, and the website becomes inaccessible during the backup. Is there a quicker/better way to back up my 22 GB and growing database? All the tables are MyISAM.

    Read the article

  • How to backup database dumps which only minor changes between the backups?

    - by wurlog
    I have a MySQL dump file every night and I keep every one of them. Each is 14 GB in size, so my backup drive will be full soon. The difference from one day to the next is only some 100 MB. How can I make the daily backups without wasting a lot of space? PS: I used tar to compress one file and the size went down to 5 GB. I hoped that when I compressed two files the ratio would be better, but no: two dumps compressed are 10 GB.

    Read the article

  • Is MozyHome Remote Backup known to slow down Windows Explorer?

    - by Kyle Estes
    Windows Explorer on my home machine is uber slow, and I suspect it is Mozy's fault. In the past Mozy has done some goofy stuff like placing file and folder icon overlays in Windows Explorer, which was really bugging me because I thought they were from TortoiseSVN (and I had uninstalled that!). Anyhow, does anyone else have Mozy installed? Is Windows Explorer really slow to respond for you, say when you simply double-click the hard drive to browse files? I'd also welcome any tips people might have on debugging why Windows Explorer might be slow (shell extensions known to cause problems, etc.).

    Read the article

  • Software to run a mozy/carbonite "online backup" server?

    - by chris
    I am interested in creating a server that is similar to Mozy/Carbonite, but I want to run it myself. So I will need server software of some sort (we run Ubuntu), and client software that runs in the background and uploads data from specified folders as it is changed. Basically I want to run the equivalent of Mozy or Carbonite for an internal corporate network. The clients are all Windows XP.

    Read the article

  • Is it reasonable to use my Time Machine backup to migrate to a new primary hard drive?

    - by Michael Haren
    I'm planning to upgrade my MacBook's hard drive. I already use Time Machine to back up the system to an external drive. Is it reasonable to use Time Machine to restore my system to the new laptop drive once I install it? I mean, a restore like this really ought to be fine, right? That's the point of it, after all! I know imaging the drive would be more appropriate, but this plan seems a whole lot easier (albeit probably slower), with practically no risk since my original drive won't be involved. A second question would then be: are there any considerations to be made when doing a Time Machine restore?

    Read the article

  • What to look for in a reliable backup hard disk?

    - by Senthil
    I want to buy an internal hard disk and use a docking station along with it for backing up important data. The size will be around 500GB to 1TB. I have a budget and several models fit into it. So far, they only seem to vary in size, speed and brand. These are the only things I can compare from the specs. I guess asking for which brand is best is completely subjective so I won't do that. I want my disk to have long life and be reliable. Doesn't matter if it is somewhat slow. Size: Should I go for the one with highest size within my budget? Will higher density cause problems? Or should I go for a moderately sized one? Does the number of platters have an impact? Speed: I do not want high performance. I want it to be reliable and last long. I am definitely not going to choose the expensive 10,000 rpm ones. Should I go for 5400 or 7200? Do these numbers affect longevity and reliability? Are there any other technical and objective factors that I should look for?

    Read the article

  • Backup database from default (unnamed) sql server instance with powershell.

    - by sparks
    Trying to connect to an instance of SQL Server 2008 on a server we'll call Sputnik. There are no firewalls in between the two devices. Right now I'm just trying to list databases:
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null
    $servername = "Sputnik"
    $remoteServer = New-Object("Microsoft.SqlServer.Management.Smo.Server") $servername
    $remoteServer.databases
    The following error message occurs:
    The following exception was thrown when trying to enumerate the collection: "Failed to connect to server Sputnik.".
    At line:1 char:15
    + $remoteServer. <<<< databases
        + CategoryInfo : NotSpecified: (:) [], ExtendedTypeSystemException
        + FullyQualifiedErrorId : ExceptionInGetEnumerator

    Read the article

  • What do YOU use for your Windows Server backup solution?

    - by Justin Higgins
    I know there are many choices, from the built-in ntbackup, to FOSS, to commercial products, to custom scripts, but I was wondering what YOUR choice is for your servers and environment, and a brief explanation of why. As each environment and its requirements differ, there is no one right solution, except that you should be backing up and verifying restoration regularly. As a bonus, do you use D2D2D? D2D2T? D2D2Cloud?

    Read the article

  • is ROBOCOPY a good form of backup as a tool?

    - by dasko
    Seeing how it does not do a write verify unless scripted (or is it even needed?), is it a decent option for dumping a few folders to another server? I am just worried about whether the data, after being copied, might be corrupted without me being aware of it. I have used it in the past without issue but am seeking feedback in case I missed something obvious.

    Read the article

  • Truecrypt system partition partially encrypted but then drive corrupted and won't boot - decrypting so I can access files to back up?

    - by Dr.Seuss
    So I was attempting to encrypt my (Windows 7) system drive with Truecrypt, and it stopped at around 15% and said that there was a segment error and that it could not proceed until it was fixed. So I restarted the computer and ran HDD Regenerator, which subsequently fixed the bad sectors on the drive, but now my system cannot boot. I ran a number of recovery disks to no avail (Windows repair is unable to fix it), and the drive won't mount under a Linux distribution run from a CD because the drive is encrypted. So I tried mounting the drive using Truecrypt under the Linux distribution on the disk and selected "Mount partition using system encryption without pre-boot authentication" so I can decrypt, but I get an error message about this only being possible once the entire system is encrypted. How do I get out of this mess? I need to be able to back up the data that's on that partially encrypted drive so I can reinstall my operating system.

    Read the article

  • Looking for advice on using dd to backup a dual boot laptop.

    - by AvatarOfChronos
    My question boils down to this: if I do "dd if=/dev/sda of=usbdrive", can anybody confirm that this will get everything, including the MBR/partition information/all four partitions, and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again. Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before who can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation? And lastly, if the USB drive is larger than the failing internal drive, will I still be able to expand the partitions later so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640 GB HDD into a 500 GB HDD. Also, if anybody has a better solution to create a complete clone that gets everything, I'm all for hearing about it. PostScript: I had been making periodic backups; however, whatever miasma killed the laptop also struck the NAS :( Post PostScript: both devices were on a UPS system.

    Read the article
