Search Results

Search found 5747 results on 230 pages for 'backup'.

  • NetBackup prefers "Scratch" tapes over dedicated tapes

    - by wfaulk
    I have a NetBackup 6.0MP7 installation running on Windows Server 2003. It functions as the only Master Server and Media Server. I swap a full set of tapes in and out every week, but leave a set of tapes with their Volume Pool set to "Scratch" in all the time. The weekly tape sets then get rotated back in after a period of time. Largely, this works fine. I seldom actually need the scratch tapes, but every once in a while, a backup will run over what I have dedicated to the task.

    However, one week's set of tapes consistently gets declined in favor of the scratch pool. The backup policies are the same for every week: they all have "Policy Volume Pool" set to "NetBackup", and all of the tapes for every week (besides the scratch tapes) have had their pools assigned as "NetBackup", definitely including the week that always gets ignored. That said, it doesn't ignore all of the NetBackup pool tapes for that week. It does usually write to two or three of them, but it writes to something like 20 of the scratch tapes. (I haven't thought to look to see if it's always the same two or three tapes.) And this problem never seems to occur for any other week. It doesn't load the tapes and then reject them; it never seems to try to use them at all. They are not flagged as frozen. They are all active and unassigned when I swap them in.

    The tapes are in a Quantum PX510 tape library. The NetBackup server is attached to the library/robot via Fibre Channel going through an HP-branded Brocade switch. I'm not an expert on NetBackup at all. I don't really even know where to look. Any advice on logs to look at or logging to enable, or really anything at all, would be appreciated. I'll keep an eye on the question and update it if anyone needs any more info to help.
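
    One place to start (an editor's sketch, not from the post; the install paths are NetBackup 6.x Windows defaults and the media ID is hypothetical): the master server's command line can show which pool the media database thinks each tape is in, which is worth comparing against what the GUI reports for the week that gets skipped.

        rem run on the master server; paths and media ID are assumptions
        "C:\Program Files\VERITAS\Volmgr\bin\vmquery" -a -b
        "C:\Program Files\VERITAS\NetBackup\bin\admincmd\bpmedialist" -ev A00001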

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the App event log of one of our database servers (Windows 2003 & SQL Server 2005). The nightly full database backups are completing successfully; however, immediately after the job success is written to the event log, there is a run of entries that say:

        SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288)Attempt to release mutex not owned by caller.

    There are five of these logged - the server itself has more than 20 databases on it, which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a re-boot on Friday to install some patches, which included KB960089.

    Edit: After getting the errors for a few days, they've now stopped without any action on my part other than letting the backups continue as they were. It may be a coincidence, but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this only by using scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job would look something like this:

        # Change to source dir
        cd /mnt/disk1

        # Create variable to store directories named by date YYYYMMDD
        j="20000101/"

        # Iterate through directories in the current dir to get the most recent folder name
        for i in $(ls -d */); do
            if [ "$j" \< "$i" ]; then
                j=${i%/*}
            fi
        done

        # Encrypt individual files from $j to the target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
            sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty \
                -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
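
    One possible angle (an untested editor's sketch, not from the post): where scp is allowed, sftp usually rides the same SSH subsystem, and its batch mode can list the remote directory non-interactively. Since the folder names are YYYYMMDD, a plain sort finds the newest, and the gpg pass can then run locally on the target (the incoming/ staging directory below is hypothetical).

        # list the remote dirs over sftp batch mode, keep the newest YYYYMMDD name
        j=$(echo "ls -l /mnt/disk1" | sftp -b - [email protected] \
            | awk '{print $NF}' | grep -o '[0-9]\{8\}$' | sort | tail -n 1)
        # pull that directory, then encrypt locally as in the old job
        scp -r -p -B "[email protected]:/mnt/disk1/$j/bsource" /mnt/disk2/btarget/incoming/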

  • duplicity fails: not prompting for password: "Running 'sftp user@host' failed"

    - by Thr4wn
    I have two Linode VPS accounts and I want to back up one onto the other (the reasons are mainly for fun and to practice server administration).

    The short version: duplicity isn't even asking for my password, but immediately says "Invalid SSH password" (yet I can ssh into the other server). Why?

    The long version: when I run

        duplicity /home/me scp://[email protected]//root/backup

    I get

        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #2)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #3)

    and it says "Invalid SSH password" immediately, with no opportunity for me to actually type the password. When I type

        duplicity full -v9 --num-retries 4 /home/me scp://[email protected]//root/backup

    I get

        Main action: full
        Running 'sftp [email protected]' (attempt #1)
        State = sftp, Before = 'Connecting to 97.107.129.67... [email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)

    I can ssh into [email protected] fine, and in fact had the IP in known_hosts before I tried any of this. Server 1 (from which I'm running the duplicity command) is Linode's default Ubuntu 8 setup with only a handful of programs installed via apt-get. Server 2 (represented by x.x.x.x) is literally only Linode's default Ubuntu 8 setup. I previously tried using SystemImager -- would that have changed settings in a destructive way? (I have removed it and rebooted since then.) Isn't duplicity supposed to prompt for a password? Am I using it wrong? Are there common mistakes/dependencies I need to know about? Is there any way that x.x.x.x could be set up that could make this not work (I used Linode's default Ubuntu 8 setup and barely )?
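
    For what it's worth (an editor's note based on duplicity's documented options, not from the post): duplicity does not prompt for a remote SSH password unless told to; by default the sftp backend expects key-based login to just work. The documented flag for interactive prompting is:

        duplicity --ssh-askpass /home/me scp://[email protected]//root/backup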

  • Can't log in after restoring from Time Machine

    - by Jay Conrod
    My friend uses a MacBook Pro with Snow Leopard 10.6.2. She uses both FileVault and Time Machine to preserve her data. Recently, she suffered a hard disk failure. After restoring from Time Machine using the Snow Leopard install disk, she gets the following error when logging in:

        You are unable to log in to the FileVault user account at this time. Logging into the account failed because an error occurred.

    When examining the file system through Terminal, I noticed her home directory is not present: there is no /Users/username directory, nor the FileVault .sparsebundle file that's supposed to be there. When using Time Machine.app on /Users, it appears as if her home directory was never there. Additionally, I did a search on the backup disk with the following command:

        sudo find /Volumes/backup -name '*.sparsebundle'

    No results. She told me that after working with some large data files, Time Machine would come on, and it would sound like it was transferring a lot of data to the hard disk. Time Machine must have been doing something, right? How can we recover her files? Are they still there?

  • Duplicity on a ReadyNAS

    - by Jason Swett
    Has anyone here run Duplicity on a ReadyNAS? I'm trying, but here's what I get:

        duplicity full --encrypt-key="ABC123" /home/jason/ scp://[email protected]//gob
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    I've also found this post that says the "Invalid SSH password" message doesn't actually mean invalid SSH password. This would make sense because I'm not using an SSH password; I'm using a public key. I can ssh, ftp, sftp and rsync into my ReadyNAS just fine. (Actually, to be more accurate, I can get past authentication with ssh, ftp and sftp but I can't actually do anything past that. Regardless, that's enough to tell me that "Invalid SSH password" is bogus. Rsync works with no problems.) The post I found says the command will work as soon as the directory at the end of your scp command exists, but I don't know how to check for that. I know the share gob exists on my ReadyNAS and I know it's writable because I'm writing to it with rsync. Also, here is the verbose output:

        Using archive dir: /home/jason/.cache/duplicity/3bdd353b29468311ffa8485160da6873
        Using backup name: 3bdd353b29468311ffa8485160da6873
        Import of duplicity.backends.rsyncbackend Succeeded
        Import of duplicity.backends.sshbackend Succeeded
        Import of duplicity.backends.localbackend Succeeded
        Import of duplicity.backends.botobackend Succeeded
        Import of duplicity.backends.cloudfilesbackend Succeeded
        Import of duplicity.backends.giobackend Succeeded
        Import of duplicity.backends.hsibackend Succeeded
        Import of duplicity.backends.imapbackend Succeeded
        Import of duplicity.backends.ftpbackend Succeeded
        Import of duplicity.backends.webdavbackend Succeeded
        Import of duplicity.backends.tahoebackend Succeeded
        Main action: full
        ================================================================================
        duplicity 0.6.10 (September 19, 2010)
        Args: /usr/bin/duplicity full --encrypt-key=ABC123 -v9 /home/jason/ scp://[email protected]//gob
        Linux gob 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686
        /usr/bin/python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5]
        ================================================================================
        Using temporary directory /tmp/duplicity-cridGi-tempdir
        Registering (mkstemp) temporary file /tmp/duplicity-cridGi-tempdir/mkstemp-ztuF5P-1
        Temp has 86334349312 available, backup will use approx 34078720.
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' (attempt #1)
        State = sftp, Before = '[email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    Any ideas as to what's going wrong?
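
    A quick way to test the missing-directory theory (an editor's sketch, not from the post) is to list the target path over the same sftp transport duplicity uses; if this fails or shows nothing, the path duplicity sees differs from the one rsync writes to:

        echo "ls -l /gob" | sftp -b - [email protected]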

  • Copy a hard drive from a failed desktop machine using a second working one.

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80GB IDE drive. I also have PC-B, running Windows Vista; this is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave to PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy user folders (i.e. Documents and Settings/[username]/*) I am told that I cannot access the folders due to user permissions. I am doing this under an administrator account on PC-B. So the question is, how can I "backup" the data? Preferably without making any changes to the drive contents. The reason for this is that it is possible that PC-A is failing due to a bad PSU, so I intend to replace it before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.
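
    One approach that fits the "no changes to the drive" constraint (an editor's sketch, not from the post; drive letters are hypothetical): robocopy ships with Vista, and its /B switch reads files using the backup privilege, which gets past the NTFS ACLs without taking ownership of anything on the source drive.

        rem run from an elevated prompt on PC-B; E: is the old drive, D: the destination
        robocopy E:\ D:\PC-A-backup /E /B /R:1 /W:1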

  • VMware Data Recovery error -3960 and Event ID 8193 on Windows Server 2003

    - by flooooo
    I've been trying to solve this problem for a few days now without any success. What I'm trying to do is make a backup of a virtual machine running Windows Server 2003 SP2 using VMware Data Recovery 2.0.0.1861. When starting the backup task, it tries to make a snapshot of the virtual machine using VSS, which fails with this error:

        Event Type:     Error
        Event Source:   VSS
        Event Category: None
        Event ID:       8193
        Date:           05.06.2012
        Time:           12:12:01
        User:           N/A
        Computer:       LEGOLAS
        Description:
        Volume Shadow Copy Service error: Unexpected error calling routine RegSaveKeyExW. hr = 0x800703f8.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 2d 20 43 6f 64 65 3a 20   - Code:
        0008: 57 52 54 52 45 47 52 43   WRTREGRC
        0010: 30 30 30 30 30 33 39 36   00000396
        0018: 2d 20 43 61 6c 6c 3a 20   - Call:
        0020: 57 52 54 52 45 47 52 43   WRTREGRC
        0028: 30 30 30 30 30 33 31 38   00000318
        0030: 2d 20 50 49 44 3a 20 20   - PID:
        0038: 30 30 30 30 36 34 38 38   00006488
        0040: 2d 20 54 49 44 3a 20 20   - TID:
        0048: 30 30 30 30 34 33 38 34   00004384
        0050: 2d 20 43 4d 44 3a 20 20   - CMD:
        0058: 43 3a 5c 57 49 4e 44 4f   C:\WINDO
        0060: 57 53 5c 53 79 73 74 65   WS\Syste
        0068: 6d 33 32 5c 76 73 73 76   m32\vssv
        0070: 63 2e 65 78 65 20 20 20   c.exe
        0078: 2d 20 55 73 65 72 3a 20   - User:
        0080: 4e 54 20 41 55 54 48 4f   NT AUTHO
        0088: 52 49 54 59 5c 53 59 53   RITY\SYS
        0090: 54 45 4d 20 20 20 20 20   TEM
        0098: 2d 20 53 69 64 3a 20 20   - Sid:
        00a0: 53 2d 31 2d 35 2d 31 38   S-1-5-18

    This machine was converted p2v. I have no idea where to search for the problem or what to do. Google showed a few results, but none of them were useful to me. Please help me. If you need further information, I'll provide it - just ask!
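
    A standard first check inside the guest (general VSS troubleshooting, not something the post mentions trying): ask VSS itself which writer is unhappy; a writer stuck in a failed state after the p2v conversion would show up here.

        vssadmin list writers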

  • Can I tell if CrashPlan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed up a particular file, including the current updates to that file - i.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy, I could compare the backup time with the last modification time of the file, but that method appears to be somewhat dubious.

    A couple of methods I could think of, but I don't know how:

    - Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.
    - Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up, too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet.

    I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely.) (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)
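
    For reference, the "copy to archive directory" arrangement described above usually looks something like this on the PostgreSQL side (an editor's sketch; the script path and archive directory are hypothetical). The automation idea in the post would amount to replacing the cp with a check that CrashPlan holds the segment, returning non-zero until it does so that PostgreSQL retries:

        #!/bin/sh
        # referenced from postgresql.conf as:
        #   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
        # $1 = path to the WAL segment, $2 = its file name
        cp "$1" "/var/backups/wal_archive/$2"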

  • How to replicate a Windows server (IIS, files, configuration state)?

    - by Geo
    Maybe a better question is: what is the closest competitor to DoubleTake? I am looking to replicate a Windows production server so that, in case it fails, I have an immediate backup. Any ideas?

    NOTE 1: I forgot to add that this server is on the EC2 Amazon Cloud.

    NOTE 2: The main situation we have is recreating the configuration settings like IIS, FTP Server, SQL Server, SVN Server.

    NOTE 3: So far I have been given three options as answers to my original question:

    - AppAssurance -- After talking to their sales team, they do not support Amazon as a cloud provider. Basically there is a technical need to be able to reboot from a disk or similar media, so an ESX virtual machine environment will work, but not EC2.
    - Acronis -- which works as a backup in Ghost style. This will work for other types of scenarios.
    - Use the Amazon EC2 API -- This option is ideal, but only works if you are developing a cloud application rather than hosting a regular application in a cloud scenario.

    This means that I am still looking for the answer. Any other ideas?

  • Ways to recover data from external hard drive

    - by Howard Benson
    I use an external hard disk for backups of my Mac with Time Machine (OS 10.5.8). I have done something wrong and I have found important folders in the recycle bin. These folders come from the external HD. They are backup folders (Backups.backupdb) and others. I have tried to restore them by dragging and dropping. Some of them came back to the external HD in a while. For the others, it takes hours to "prepare to copy" and then it says "there's no space to copy" on the ext HD. It's strange. The files are now in the recycle bin (180GB), and the ext HD should have a lot of free space. But it isn't really so: the ext HD is not free of space even though these files are in the bin. I ask for advice. I'm also not able to use Time Machine now (and I have "lost" old backups) for the same reason: the ext HD says that it has no free space. Thanks

  • Time Machine vs Source Control?

    - by Blub
    I finally got convinced to start using some kind of version control for my code instead of zipping down a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out, check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like it's exactly what I'm looking for. Does such a program exist for Windows? Preferably free, and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer.

    Update: Some of you asked why I want backup. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.

  • Backing up a Windows 7 partition from a MacBook with no OS X

    - by mattcodes
    I have a 3-year-old MacBook with Windows 7 installed on a 40GB partition and OS X on another 40GB (80GB HD). I want to remove OS X, as I'm at the limit of 40GB on Windows and I have not logged on to Mac OS X since I installed Win7 (don't flame me). So I want to delete the OS X partition and expand my Windows partition to 80GB, BUT I would still like to be able to regularly (once a week/month) back up my Windows 7 partition - it took a while to set everything up right - not just docs and programs - so when the hard drive dies I want to be able to restore the partition and boot away (the daily volatile bits I can pull down from Dropbox and projects from source control). With Mac OS X I could use Winclone - and this worked flawlessly last time the HD failed with XP - but in the absence of OS X I will need something else. I'm thinking: can I use a Linux live boot CD along with an external USB hard drive? Boot from CD and then dd the partition to the USB? What Linux distro live CD should I use? I say dd as if I know what I'm talking about (I don't) - is this the best way to back up a partition (when it will be restored to the same hardware as bootable)? What command?
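
    To make the dd idea concrete (an editor's sketch, not advice from the post; any mainstream live CD's dd works, and the device names are assumptions - check them with fdisk -l first):

        # back up: Windows partition assumed at /dev/sda1, USB drive mounted at /mnt/usb
        dd if=/dev/sda1 of=/mnt/usb/win7.img bs=4M conv=noerror,sync
        # also save the MBR and boot code so the disk stays bootable after a restore
        dd if=/dev/sda of=/mnt/usb/mbr.img bs=512 count=1
        # restore later by swapping if= and of=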

  • Windows disk change monitoring for malware analysis

    - by SuperDuck
    Not sure if this question belongs here, because it has some relations with 'serverfault' (system backups) and 'stackoverflow' (software analysis). I'm looking for a solution to monitor disk changes on a Windows system and selectively revert them. It should be able to handle live files like registry parts, so it may need to be offline backup software. It shouldn't silently pass over files which the current admin user doesn't have permissions on (files with no permission entries or owned by the 'system' user). Registry change tracking would be a bonus, but is not a requirement. I use virtual machines for malware analysis; there is no solution even to list file changes in disk snapshot files (delta VMDK). I currently use Ashampoo for monitoring changes. Though it's the best among similar tools, it's not a good piece of software and hasn't really evolved through the many 'platinum' and 'deluxe' versions released in the last 10 years (it even used non-resizable windows until the latest version). The real problem is that it misses some disk/registry changes. Perhaps it only compares modification dates and doesn't catch a change if the dates are preserved. So, I think the solution should compare files using hashes, or file sizes at least. There are numerous backup tools out there and I'm sure one can handle this, offline or online.
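
    The hash-comparison idea reduces to something like this (an editor's sketch, not from the post; it assumes the guest disk can be mounted read-only on an analysis host, and the paths are hypothetical). Content changes surface in the diff even when modification dates have been preserved:

        # snapshot the tree before running the sample
        find /mnt/guest -type f -print0 | xargs -0 sha256sum | sort -k 2 > before.txt
        # ... run the sample in the VM, re-mount the disk ...
        find /mnt/guest -type f -print0 | xargs -0 sha256sum | sort -k 2 > after.txt
        diff before.txt after.txt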

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers using Windows XP Pro. I want to synchronize/back up a folder from one machine to another one. So far, it's a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links; using such backup programs will result in hard links being treated as multiple files and copied as such, greatly increasing folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync using --hard-links will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd-party tools to do it, as obviously the standard Windows shares approach won't work... I am guessing there might be some tool that can be installed on both machines and works in a server/client mode? Does anyone have any idea how to do this?
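
    The server/client guess is essentially how rsync gets around the share limitation (an editor's sketch, not a tested recipe; the host, module, and user names are made up, and on XP this means running rsync natively on both ends, e.g. via cwRsync or Cygwin). Because both ends speak the rsync protocol, the hard-link information travels inside it instead of being flattened by SMB:

        # pull from an rsync daemon on the source machine; -H preserves hard links
        rsync -aH rsync://backupuser@sourcehost/data/ /cygdrive/d/backup/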

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, huge mess for MySQL), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows me to make decent backups. As more apps are added, it should not be necessary to change the backup script much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick
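
    The file-as-PV idea does work in principle via a loop device (an editor's sketch with hypothetical names and sizes, needing root; note the loop mapping must be re-established after each reboot, and some headroom must stay free in the volume group for snapshots to use):

        dd if=/dev/zero of=/srv-pv.img bs=1M count=10240    # 10 GiB backing file
        losetup /dev/loop0 /srv-pv.img
        pvcreate /dev/loop0
        vgcreate vg_backup /dev/loop0
        lvcreate -L 8G -n srv vg_backup                     # leave ~2G free for snapshots
        mkfs.ext3 /dev/vg_backup/srv
        mount /dev/vg_backup/srv /srv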

  • Is there any method of backing up Google Drive files in some sort of versioning system?

    - by VictorKilo
    Backstory: My company is utilizing Google Drive for our shared files. Each user has their own Drive account. In addition, we have a corporate Drive account which holds documents that are shared with each user. Each folder is shared with different users depending on their permissions and positions in the company. Many users are able to add files and update folders within this shared Drive account. This is fine. What is not fine is when someone deletes something that they shouldn't. I have little to no way of knowing when a file is deleted wrongfully. Furthermore, anything that gets deleted goes into the trash bin of the file's creator, so I can't just restore it from the trash.

    Question: Is there any method of backing up Google Drive files in some sort of versioning system that would allow me to revert files back to defined points in time?

    What I have tried: I currently have this corporate Drive account synced to my personal computer through the Google Drive application. Each night, I run a backup of the files using Windows "Backup and Restore." This allows me to at least get back files that are lost, but I'd like a cleaner method than this. It's very possible that I may not have the very latest version of a document on my computer when the utility runs.

  • How can I create two partitions and clone one to the other (using Clonezilla)?

    - by johnny
    I was hoping someone could help. I want to create a "backup" partition. I want to create two partitions on my drive: one is a good install, and I then want to use Clonezilla to copy the good partition to the broken/unused partition and have the restored partition boot up as usual. For example: C: goes bad. D: is a "good" copy of C. C gets a corrupt registry. I restore D to C, and C will then boot up as usual. So, I need to do the clone with Clonezilla and the restore with the same. I see the part_...clone and restore. Will this do it? How do I get the partitions?

    EDIT: I am using XP. How can I do this? Also, I know this is not the best thing for all occasions. I have an offline backup as well. I would like to have both. Thanks for any help. I'm using Clonezilla, if it matters.

  • SQL Server: Database stuck in "Restoring" state

    - by Ian Boyd
    I backed up a database:

        BACKUP DATABASE MyDatabase
        TO DISK = 'MyDatabase.bak'
        WITH INIT --overwrite existing

    And then tried to restore it:

        RESTORE DATABASE MyDatabase
        FROM DISK = 'MyDatabase.bak'
        WITH REPLACE --force restore over specified database

    And now the database is stuck in the restoring state. Some people have theorized that it's because there was no log file in the backup, and it needed to be rolled forward using:

        RESTORE DATABASE MyDatabase WITH RECOVERY

    Except that, of course, fails:

        Msg 4333, Level 16, State 1, Line 1
        The database cannot be recovered because the log was not restored.
        Msg 3013, Level 16, State 1, Line 1
        RESTORE DATABASE is terminating abnormally.

    And exactly what you want in a catastrophic situation is a restore that won't work. The backup contains both a data and log file:

        RESTORE FILELISTONLY FROM DISK = 'MyDatabase.bak'

        Logical Name     PhysicalName
        =============    ===============
        MyDatabase       C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase.mdf
        MyDatabase_log   C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase_log.LDF
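
    A commonly suggested way out of the stuck state (an editor's sketch, not from the post, and worth testing against a copy first) is to run the restore again, naming both files from the FILELISTONLY output and finishing WITH RECOVERY:

        RESTORE DATABASE MyDatabase
        FROM DISK = 'MyDatabase.bak'
        WITH REPLACE, RECOVERY,
            MOVE 'MyDatabase' TO 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase.mdf',
            MOVE 'MyDatabase_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase_log.LDF'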

  • Can I get a dump of all my databases *except one* using mysqldump?

    - by Daniel Magliola
    I'm currently using mysqldump to back up my dev machine and servers. There is one project I just started, however, that has a HUUUUUGE database that I don't really need backed up, and it'll be a big problem to add it to the rest of the backup cycle. I'm currently doing this:

        "c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --all-databases > g:\backups\MySQL\mysqlbackup.sql

    Is it possible to somehow specify "except this database(s)"? I wouldn't like to have to specify the list of DBs manually, since that would mean I'd have to remember to update my backup batch file every time I create a new DB, and I know that's not gonna happen.

    EDIT: As you probably guessed from my command line above, I'm doing this on Windows, so I can't do any kind of fancy bash stuff, only wimpy .bat things. Alternatively, if you have other ideas to solve this same issue, they are more than welcome, of course! Thanks, Daniel
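
    Staying within wimpy .bat territory, one sketch of the usual workaround (an editor's example, untested here; "hugedb" is a hypothetical name for the database to skip) builds the list dynamically, so new databases are picked up automatically:

        @echo off
        setlocal EnableDelayedExpansion
        set DBS=
        rem -N suppresses the column header from SHOW DATABASES
        for /f %%D in ('mysql -N -u root -pxxxxxx -e "SHOW DATABASES"') do (
          if not "%%D"=="hugedb" if not "%%D"=="information_schema" set DBS=!DBS! %%D
        )
        "c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --databases %DBS% > g:\backups\MySQL\mysqlbackup.sql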

  • How to use forfiles (or similar) to delete files older than n days, but always leaving most recent n

    - by Aidan Whitehall
    (Using Windows 2000 and 2003 Server.) We use forfiles.exe to delete backup .zip files older than n days, and it works great; the command is a bit like this:

        forfiles -p"C:\Backup" -m"*.zip" -c"cmd /c if @ISDIR==FALSE del \"@PATH\@FILE\"" -d-5

    If a .zip file fails to be created, I'd like to ensure that we don't end up with 0 .zip files in the backup after 5 days. Therefore, the command needs to be: "delete anything older than 5 days, but ALWAYS keep the most recent 5 files, EVEN if they themselves are older than 5 days". We can use forfiles.exe or another solution (although anything that is a slick one-liner is ALWAYS preferable to a script file). Thanks!
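
    Not a one-liner, but a sketch of the combined rule using the same forfiles syntax as above (an editor's example in batch-file form, hence %%F; the path is hypothetical): dir /o-d lists newest first, skip=5 spares the five newest unconditionally, and the age test is then applied only to the remainder.

        for /f "skip=5 delims=" %%F in ('dir /b /o-d "C:\Backup\*.zip"') do (
          forfiles -p"C:\Backup" -m"%%F" -c"cmd /c del \"@PATH\@FILE\"" -d-5
        )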

  • Restore .bak file created on Windows XP Pro in Windows XP Pro 64

    - by Kobojunkie
    I have a situation that I need help with. I backed up my files on Windows XP using the system backup utility/wizard, and then installed a new operating system on the machine. Now I want to restore my old files via the .bak file, but it is not being recognized at all. Did I do this wrong, or is there a way to still get back my old files on my new OS? Thanks in advance!

  • How to restore files from an external drive done with Norton Ghost

    - by Den
    Our office XP computer crashed this afternoon. I know that we had Norton Ghost doing regular backups to an external hard drive. My question is: how do you restore from that backup?

    P.S. When I plug the external drive into my own Windows 7 machine, it has two folders:

    - 9c590f17be2ef53d13bb00be2a
    - Norton Backups

    I can't open the first one (it says I don't have permissions), but the second one contains a bunch of files with the .v2i extension. Any help would be appreciated.
