Search Results

Search found 6101 results on 245 pages for 'incremental backup'.

Page 59/245 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • Oracle backup and recovery

    - by kupa
    During recovery, Oracle reports the following error: RMAN-06054: media recovery requesting unknown log: thread 1 seq 9 lowscn 4034762. In mount mode I ran: change archivelog all crosscheck; delete expired archivelog all; then I restored and tried to recover again, but I still got RMAN-06054. Then I ran: run{ SET UNTIL SEQUENCE 9 THREAD 1; RESTORE DATABASE; RECOVER DATABASE; } That let me recover the database, but after the next backup the same error occurs on recovery, and the fix is the same. I would like to solve this problem without SET UNTIL SEQUENCE 9 THREAD 1; perhaps I should unregister this archived log from the control file (I am using the control file, not a recovery catalog). Can you tell me how?

    Read the article
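
    A minimal sketch of the recovery sequence described above, wrapped in a shell heredoc. The sequence/thread numbers, the NOPROMPT deletes, and the RESETLOGS open are illustrative assumptions mirroring the question's workaround, not a confirmed fix for RMAN-06054:

        #!/bin/sh
        # Connect to the target database using the control file (no catalog),
        # as described in the question.
        rman target / <<'EOF'
        # Re-sync the control file's view of archived logs with what is on disk.
        CROSSCHECK ARCHIVELOG ALL;
        DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;

        # Point-in-time restore/recover up to (but not including) log seq 9,
        # mirroring the workaround from the question; adjust to your environment.
        RUN {
          SET UNTIL SEQUENCE 9 THREAD 1;
          RESTORE DATABASE;
          RECOVER DATABASE;
        }

        # After an incomplete recovery the database is opened with RESETLOGS.
        SQL 'ALTER DATABASE OPEN RESETLOGS';
        EOF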

  • Configuring a backup DNS server

    - by mattyh88
    I would like to set up a backup DNS server. I have added all my name servers in my domain name panel (ns1.domain.com, ns2.domain.com, etc.). If someone tries to reach domain.com and the first name server fails, will the resolver automatically try ns2.domain.com? All my DNS servers have the same master zones configured. Is that the way to go? Is it really that easy, or am I missing something here? :)

    Read the article
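
    Not an answer to the delegation question itself, but a quick way to confirm from the outside that both name servers are published and answering; the zone and host names below are placeholders:

        #!/bin/sh
        # List the NS records actually delegated for the zone.
        dig NS domain.com +short

        # Ask each name server directly for the same record and compare answers.
        for ns in ns1.domain.com ns2.domain.com; do
          echo "== $ns =="
          dig @"$ns" domain.com A +short
        done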

  • Is it safe to delete rotated MySQL binary logs?

    - by Milan Babuškov
    I have a MySQL server with binary logging active. Once a day the log file is "rotated", i.e. MySQL stops writing to it and creates a new log file. For example, I currently have these files in /var/lib/mysql:

        -rw-rw---- 1 mysql mysql 10485760 Jun 7 09:26 ibdata1
        -rw-rw---- 1 mysql mysql  5242880 Jun 7 09:26 ib_logfile0
        -rw-rw---- 1 mysql mysql  5242880 Jun 2 15:20 ib_logfile1
        -rw-rw---- 1 mysql mysql  1916844 Jun 6 09:20 mybinlog.000004
        -rw-rw---- 1 mysql mysql 61112500 Jun 7 09:26 mybinlog.000005
        -rw-rw---- 1 mysql mysql 15609789 Jun 7 13:57 mybinlog.000006
        -rw-rw---- 1 mysql mysql       54 Jun 7 09:26 mybinlog.index

    and mybinlog.000006 is still growing. Can I simply take mybinlog.000004 and mybinlog.000005, zip them up and transfer them to another server, or do I need to do something else first? What info is stored in mybinlog.index? Only the name of the latest binary log? UPDATE: I understand I can delete the logs with PURGE BINARY LOGS, which updates the mybinlog.index file. However, I need to transfer the logs to another computer before deleting them (I test whether the backup is valid on another machine). To reduce the transfer size, I want to bzip2 the files. What will PURGE BINARY LOGS do if the log files are not "there" anymore?

    Read the article
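
    A rough sketch of the copy-then-purge order of operations discussed above: compress the closed binlogs, ship them, verify them, and only then let MySQL purge up to the still-growing file so it maintains mybinlog.index itself. Host names and paths are placeholders:

        #!/bin/sh
        set -e
        cd /var/lib/mysql

        # Only the closed logs; mybinlog.000006 is still being written to.
        for f in mybinlog.000004 mybinlog.000005; do
          bzip2 -k "$f"                       # -k keeps the original in place
          scp "$f.bz2" backup@otherhost:/backups/binlogs/
          rm -f "$f.bz2"
        done

        # Once the copies are verified on the other machine, have MySQL remove
        # the originals and update mybinlog.index, rather than deleting files
        # by hand.
        mysql -u root -p -e "PURGE BINARY LOGS TO 'mybinlog.000006';"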

  • synchronization of file locations between two machines

    - by intuited
    Although similar threads have been asked on this site and its siblings before, I haven't managed to glean the answer to this persistent question. Any help is much appreciated.

    The situation: I've got two laptops; both contain a ton of music. Sometimes I move these music files to different locations, or change their metadata, or convert them to a different format. I might do any of these things on either machine. I rarely do all of them at once, i.e. it's unlikely that I'll convert a file's format and move it to a different location in one go. I'd like to be able to synchronize these changes without having to sift through everything that was renamed or moved.

    I'm familiar with rsync but I find it inadequate: although it can compute checksums, it doesn't have any way to store them, so if a file differs it can't figure out which side it changed on. It also can't match a missing file to a new one with the same checksum (i.e. a move), so a sync on a large repository takes an epoch. And even with checksumming turned on, it doesn't use checksums intelligently: IIRC it checksums files even when the sizes differ, and it's not able to use file metadata as a means of comparison (that last one is sort of a wishlist item, but it seems doable).

    I've also looked into rsnapshot, but its requirement to create a full backup is impractical in this situation. I don't need a backup, I just need a record of which file with which hash was where, and when. Unison seems like it might do something vaguely along these lines, but I'm loath to spend hours wading through its details only to discover that it's sadly lacking. Plus, it's fun asking questions on here.

    What I'd like is a tool that does something along these lines:
    - keeps track of file checksums or of actual renames, possibly using inotify to greatly reduce resource consumption/latency;
    - stores a database containing this info, along with other pertinencies like the file format and metadata, the actual inode, the filename history, etc.;
    - uses this info to provide more-intelligent synchronization with a counterpart on the other side.

    So, for example: if a file has been converted from flac to ogg but kept the same base filename or the same metadata, it should be able to send the new version over, and the other side should delete the original. (Probably it should actually sequester it somewhere in case they or you screwed up, but that's a detail.) And when the transaction is done, the state is logged so that the next time the two machines interact they can work out their differences.

    Maybe all this metadata stuff is a fancy pipe dream. I would actually be pretty happy if there were something out there that could just use checksums in an intelligent way. This would be sort of like having the intelligence of something like git, minus the need to duplicate data in an index/backup/etc. (and minus branching, checkouts, and all the other great stuff that RCSs do; basically just fast-forward commit pushes are all I want, with maybe the option to roll back). So is there something out there that can do this? If not, can someone suggest a good way to start making it?

    Read the article
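
    There may be no off-the-shelf tool that does all of the above, but the "checksums in an intelligent way" part can be approximated with a small manifest: hash every file on each machine, then diff the manifests to spot moves (same hash, different path) versus real changes. This is only a rough sketch, with paths and host names as placeholders:

        #!/bin/sh
        # Build a "hash  path" manifest of the music tree on this machine.
        MUSIC_DIR="$HOME/music"
        MANIFEST="/tmp/manifest.$(hostname).txt"

        find "$MUSIC_DIR" -type f -print0 \
          | xargs -0 md5sum \
          | sort > "$MANIFEST"

        # Copy the other machine's manifest over (built the same way there),
        # then compare. Lines present on only one side are adds/removes/moves;
        # a move shows up as the same hash paired with two different paths.
        scp otherlaptop:/tmp/manifest.otherlaptop.txt /tmp/
        diff /tmp/manifest.otherlaptop.txt "$MANIFEST" | less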

  • is there a way to have VIM do incremental search on text files?

    - by Alex
    Is there a way to have Vim do incremental search across text files? Vim already does incremental search within the currently open file. Examples of programs that demonstrate the type of search I'm trying to accomplish are Notational Velocity for Mac OS, Resop for Windows, and SimpleNote for the web. These apps do an instant or incremental search across the files of a specific directory and make it easy and fast to narrow down the file you are looking for, or to create a new file. I use both but would rather live in one editor (that being Vim). Is there some plugin that would do this?

    Read the article
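
    This isn't truly incremental, but one approximation from the shell side is to feed grep matches into Vim's quickfix list, so narrowing down a note is one grep away; the notes directory and script name are assumptions:

        #!/bin/sh
        # Usage: notes.sh <search term>
        # Greps a notes directory and opens Vim with the hits loaded into the
        # quickfix list, so :cnext / :cprev jump between matching files/lines.
        NOTES_DIR="$HOME/notes"

        grep -rn "$1" "$NOTES_DIR" > /tmp/notes-hits.txt || {
          echo "no matches for: $1" >&2
          exit 1
        }
        vim -q /tmp/notes-hits.txt +copen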

  • how to include error messages into backup reports for SQL Server 2008 R2?

    - by avs099
    Right now I have daily (differential) and weekly (full) backups set up on my SQL Server 2008 R2 instance as SQL Server Agent jobs, with email notifications if a job fails. I do get emails like this:

        JOB RUN: 'Daily backup.Diff backup' was run on 4/11/2012 at 3:00:00 AM
        DURATION: 0 hours, 0 minutes, 28 seconds
        STATUS: Failed
        MESSAGES: The job failed. The Job was invoked by Schedule 9 (Daily backup.Diff backup). The last step to run was step 1 (Diff backup).

    but that often happens because we delete/create new databases, and the diff backup fails. The only way for me to see the actual reason is to go to Log Viewer - Maintenance Plans logs. Is it possible to include the "Error Message" field from the logs in the notification emails? More generally, is it possible to change the notification email templates somehow? Thank you.

    Read the article
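
    Whether the stock notification template can be changed is a separate question, but the error text itself lives in msdb, so one possible workaround (a sketch only; server name, recipient, and the availability of sqlcmd and mail to the scheduled account are assumptions) is a scheduled script that mails the recent failed-step messages:

        #!/bin/sh
        # Pull the message text for recently failed job steps out of msdb
        # and mail it.
        sqlcmd -S MYSQLSERVER -E -Q "
        SELECT TOP 20 j.name, h.step_name, h.run_date, h.run_time, h.message
        FROM msdb.dbo.sysjobhistory h
        JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
        WHERE h.run_status = 0            -- 0 = failed
        ORDER BY h.instance_id DESC;" > /tmp/failed_steps.txt

        mail -s 'SQL Agent failed step details' dba@example.com < /tmp/failed_steps.txt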

  • TSM Backup Fails with VSS

    - by user176320
    Since we rebuilt our servers as Windows Server 2008 R2 x64, we have had persistent problems with the scheduled backup (shadow copy). In the Application event log, the following events are logged on both nodes:

        The description for Event ID 4112 from source TsmVssPlugin cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event:
        VSS processing encountered error 'VSS_E_UNEXPECTED_PROVIDER_ERROR' in the Volume Shadow Copy API 'AddToSnapshotSet'. For more information, see the IBM Tivoli Storage Manager client error log.

    Reinstalling TSM didn't change anything; the same events are still being logged. The TSM error log shows the following:

        ANS1327W The snapshot operation for 'C:' failed with error code: -1.
        ANS5250E An unexpected error was encountered.
        TSM function name : BaStartSnapshot
        TSM function      : initializeSnapshotSet() failed, check Microsoft Application event log for VSS errors
        TSM return code   : -1

    Read the article

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all with identical hardware. What I'd like to do is install Windows 7 (64-bit), join the domain, install a bunch of other software, and then clone that drive to the other machines. I'd also like to be able to use it as a backup image, so a machine can be restored to that image at some future date. I understand this involves at least sysprep, but I am confused after reading some tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago and don't remember it being this bad. Surely things have improved in a decade? There are also some products that involve having network servers running deployment software, network boot, etc., which is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the first machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?

    Read the article

  • "cannot receive new filesystem stream: invalid backup stream" error when unpacking flash archive on solaris 10

    - by Bovril
    I've searched around but I'm having no luck with some peculiar behavior of a flash archive. I'm using HP Server Automation 9.14 to deploy the OS, and I'm creating a Solaris 10 flash archive as a snapshot of our default build. I create the flash archive with:

        # flar create -c -S -n g8-solaris10-u10 g8-solaris10-u10.flar

    It seems to create the file without any problems (exit status 0). When deploying to a new system (same hardware), it extracts to a point and then bails. The last errors I can see in the log are:

        Extracted 2047.00 MB ( 82% of 2488.98 MB archive)
        ERROR: Could not read file (172.27.118.100:/media/opsware/sunos/flar/g8-solaris10-u10.flar
        ERROR: Errors occurred during the extraction of flash archive. The file /tmp/flash_errors contains the list of errors encountered
        ERROR: Could not extract Flash archive
        ERROR: Flash installation failed

    The error log contained the following message:

        cannot receive new filesystem stream: invalid backup stream

    A previous version of this flash archive (1.8 GB) worked OK, so I suspect size may be a factor. The source system (the one the flash archive is an image of) is an HP BL460C GEN8; some more info below.

    OS version info:

        # uname -a
        SunOS testhostname 5.10 Generic_147441-01 i86pc i386 i86pc
        # who -r
        . run-level 3 Oct 15 08:15 3 0 S

    Disks:

        # echo | format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
        0. c0t0d0 <DEFAULT cyl 17841 alt 2 hd 255 sec 63>
           /pci@0,0/pci8086,3c06@2,2/pci103c,3355@0/sd@0,0
        Specify disk (enter its number):

    zpools:

        # zpool list
        NAME   SIZE  ALLOC  FREE  CAP  HEALTH  ALTROOT
        rpool  136G  24.6G  111G  18%  ONLINE  -

    Zones:

        # zoneadm list -cv
        ID  NAME    STATUS   PATH  BRAND   IP
        0   global  running  /     native  shared

    The extracted size of 2047 MB seems suspiciously close to 2048 MB (2 GB), which is concerning. Any help would be greatly appreciated. Thanks

    Read the article
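
    Given how close 2047 MB is to a 2 GB boundary, one sanity check is to inspect the archive's own metadata and its exact size on the NFS share before suspecting the deployment tooling. The flar info subcommand and flags used here are assumptions to verify against your Solaris release, and the path comes from the question:

        #!/bin/sh
        ARCHIVE=/media/opsware/sunos/flar/g8-solaris10-u10.flar

        # Exact byte size: anything hovering around 2 GiB (2147483648 bytes)
        # is worth noting if any hop in the chain is not largefile-safe.
        ls -l "$ARCHIVE"

        # Archive identification section (creation method, sizes, compression).
        flar info "$ARCHIVE"

        # Optional: list the archive contents to see whether the listing also
        # stops near the same point as the failed extraction.
        flar info -l "$ARCHIVE" | tail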

  • Secondary backup server

    - by verdy
    I've been given the task of implementing a backup solution for the event that our website goes down. It is a dedicated server running CentOS 6. From my experience with our server, it may go down because of a PHP application crash or a hardware failure. I have a couple of questions. In the first case, is it possible to have the server restart PHP automatically, and how can I do that? In my mind, if it is only the application that goes down, I can probably still make use of the server itself. In the second case, can I redirect requests to a secondary server? How can I do that, and what do I need other than another server? For now the secondary is going to be a simple server that shows the user a static landing page, while the system notifies us via email that the primary server went down so that we can restart it manually. Is it possible to set up just a VPS, or even a shared server, as the secondary, since it will only serve a static page? Thanks. Any help would be much appreciated.

    Read the article
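
    For the first part (restarting the web stack automatically when only the application has died), a small watchdog run from cron is a common low-tech approach. This is only a sketch for a CentOS 6 box running Apache; the URL, service name, recipient address, and the presence of a local MTA are assumptions:

        #!/bin/sh
        # Suggested cron entry (assumption), e.g. in /etc/cron.d/webcheck:
        #   * * * * * root /usr/local/sbin/webcheck.sh
        URL="http://localhost/"
        LOG=/var/log/webcheck.log

        # -f: treat HTTP errors as failures; --max-time: don't hang forever.
        if ! curl -fsS --max-time 10 "$URL" > /dev/null; then
            echo "$(date): $URL not responding, restarting httpd" >> "$LOG"
            service httpd restart >> "$LOG" 2>&1
            echo "web check failed on $(hostname), httpd restarted" \
                | mail -s "webcheck: httpd restarted" admin@example.com
        fi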

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of:

        Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file

    Any advice here, or an idea of where to look for a better solution?

    FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as it stands meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, as I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file explaining the reasons for these errors, and even the "--debug-output=true" flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get the error above. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that presents the same benefits of Duplicati/Duplicity but that actually works across platforms?

    Read the article
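
    Since Duplicity is already part of the stack described above, one possible fallback (a sketch only; backend URL and paths are placeholders) is to do the Linux-side restores with Duplicity directly and keep Duplicati for the Windows boxes:

        #!/bin/sh
        # Restore the whole backup set to a scratch directory.
        duplicity restore scp://backupuser@backuphost//backups/server1 /tmp/restore-test

        # Or pull back just one file out of the set (path is relative to the
        # root that was backed up).
        duplicity restore --file-to-restore snapshot/blahblah/2005-11-07.tar.gz \
            scp://backupuser@backuphost//backups/server1 /tmp/restore-one

        # Sanity-check what the remote end thinks it has before restoring.
        duplicity collection-status scp://backupuser@backuphost//backups/server1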

  • guest crash on long backup via rsync

    - by ToreTrygg
    I recently upgraded the host to Ubuntu 9.10 with VMware Server 2.0.2; I have two guest machines. One is an SME Server guest, and I had several crashes of it during a backup session using rsync to another PC. Normal activity runs fine. The other guest has been up without problems for 25 days. In the log I found a lot of rows like these:

        Dec 20 05:29:27.445: vcpu-1| VLANCE: Ethernet0 skipped 2560 time(s)
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 66 12 5 8 2 3 3 0 1 0 0 1 0 1 2 0
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 2452
        Dec 20 05:29:27.651: vmx| ide0:0: Command WRITE(10) took 1.947 seconds (ok)
        Dec 20 05:29:37.945: vmx| ide0:0: Command WRITE(10) took 1.033 seconds (ok)

    When the virtual machine crashes, the log reports the following (I paste only part of it here to limit the length of the message):

        Dec 27 01:48:05.686: Worker#2| Caught signal 6 -- tid 700
        Dec 27 01:48:05.686: Worker#2| SIGNAL: eip 0x460422 esp 0xb124c024 ebp 0xb124c03
        Dec 27 01:48:05.712: Worker#2| SymBacktrace12 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.719: Worker#2| Unexpected signal: 6.
        Dec 27 01:48:05.720: Worker#2| Core dump limit is 0 KB.
        Dec 27 01:48:05.762: Worker#2| Child process 10455 failed to dump core (status 0x6).
        Dec 27 01:48:05.762: Worker#2| SymBacktrace13 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.779: Worker#2| Msg_Post: Error
        Dec 27 01:48:05.780: Worker#2| http://msg.log.error.unrecoverable VMware Server unrecoverable error: (Worker#2)
        Dec 27 01:48:05.780: Worker#2| Unexpected signal: 6.

    I have no idea how to solve the problem with this installation. I am thinking of downgrading the host to a version more compatible with VMware Server 2. I have read a lot of posts about installation difficulties, and I think the compilation problems during install could be related to my problem. Excuse me if the post isn't very clear; it's my first post here. Any help or suggestion will be appreciated. Thanks

    Read the article

  • Symantec Backup Exec 12 Tape Alert.

    - by Adam
    Every day I run 5 backup jobs using 6 tapes. Each day, when I run the inventory, I get a Tape Alert error. This occurs every day, on the same job. The error is:

        Job 'Inventory Daily ********' has reported Multiple Tape Alerts on server '******' Please refer to job log *****.xml for more information.

    When I look at the job log, the Utility Job Information says:

        The device has reported the following TapeAlert diagnostic information
        Information - The library has been manually turned offline and is unavailable for use. Robotic library for device: PV132T 500
        Warning - Library security has been compromised. Robotic Library for device: PV132T 500.
        Critical - The library has detected an inconsistency in its inventory. 1. Redo the library inventory to correct the inconsistency. 2. Restart the operation. Check the application's user manual or hardware user manual for specific instructions on redoing the library inventory. Robotic Library for Device PV132T 500.

    When I run the same inventory a second time, the job completes successfully. I am using Symantec Backup Exec 12 running on Windows Server 2008, with a Dell PowerVault 132T 500 tape library. If anyone can help me resolve this problem, it would be very much appreciated.

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi. We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database (one-way replication). The solution has worked well in test. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting database or any of the replication system databases (e.g. distribution) fail, then it's no big deal: we can clear it all down and re-create the replication. I realise that taking a complete snapshot of the OLTP database would be time-consuming (approx 5 hours), but I'd be more relaxed about that than about trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than mine, e.g. if there were multiple subscribers with two-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.

    Read the article
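
    Whatever the verdict on the replication databases, the "keep backing up the OLTP data and logs" baseline described above boils down to two commands. A sketch only, with the database name, paths, and schedule left as assumptions, run from any host that can reach the instance with sqlcmd:

        #!/bin/sh
        # Nightly full backup of the publisher (OLTP) database.
        sqlcmd -S OLTPSERVER -E -Q "
        BACKUP DATABASE [Sales]
        TO DISK = N'E:\SQLBackups\Sales_full.bak'
        WITH INIT, CHECKSUM, STATS = 10;"

        # Frequent log backups in between, which also keeps the log from
        # growing unchecked while transactional replication is running.
        sqlcmd -S OLTPSERVER -E -Q "
        BACKUP LOG [Sales]
        TO DISK = N'E:\SQLBackups\Sales_log.trn'
        WITH INIT, CHECKSUM, STATS = 10;"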

  • Unable to back up SQL Server databases using a maintenance plan

    - by Stephen Jennings
    I am trying to create a maintenance plan that will run automatically and back up my SQL Server 2005 databases. I create a new maintenance plan, add a "Back Up Database Task", select all databases, and choose a path to back up to. When I save and try to execute this plan, I get the following error message:

        ===================================
        Execution failed. See the maintenance plan and SQL Server Agent job history logs for details.
        ===================================
        Job 'Backup.Subplan_1' failed. (SqlManagerUI)
        ------------------------------
        Program Location:
        at Microsoft.SqlServer.Management.SqlManagerUI.MaintenancePlanMenu_Run.PerformActions()

    I've checked the maintenance plan log, the agent log, and just about every log file I can find, and there are no entries at all to help me figure out why this is failing. If I right-click on a specific database and select "Back Up", the task succeeds. I tried changing the plan to back up just that one database and it still failed. I've tried running the plan with both Windows authentication and SQL Server authentication with the sa account. I also tried specifically granting the SQL Server Agent user account full privileges on the backup folder, but it still failed. While searching the web for clues, the only solution I've run across so far suggests running sp_configure 'allow_update', 0. I tried this, but allow_update was already set to 0 and it did not fix the problem. The Windows server and SQL Server have all updates applied. Thanks for any suggestions!

    Read the article

  • Windows Server 2008 R2, SQL Server 2008 R2, and Registry Backups

    - by charliedigital
    Hi folks! As a developer, I've installed various instances of SQL Server (2000, 2005, 2008, R2) all the way back to 2003, and I've never had an install fail on me... until yesterday. I was installing SQL Server 2008 R2 onto a Windows Server 2008 R2 hosted virtual server, and the install finished but failed on every component. To make matters worse, it was left in a state in which I could not uninstall it either, even using command-line options! I dug around a bit but didn't get very far. The error is enigmatic and Google didn't turn up much hope. So today I am going to try again, after having the VM image wiped overnight. My question is: how can I guard against the same failure today? I don't mind if it fails again, because then I'll know it may be something wrong with the base image I'm getting from the hosting company. But I really don't feel like paying another $20 to wipe the VM, and I have no idea why it failed. Is it enough for me to back up the registry so that I can restore it in case the install fails? What about the installation files? Do I need a tool to clean those out, too? Sorry, I'm no sysadmin, so I have no real experience with backup/restore aside from System Restore! Any advice would be appreciated!

    Read the article

  • Can VMWare Server 2.0 be useful in Production for easing backups?

    - by Keith Sirmons
    Howdy. Let's run this idea by the group here. I am thinking about using VMware Server in production to host: a 2008 domain controller with DHCP and DNS; a 2008 member server with WSUS, some antivirus software, and other "management" utilities; and a second 2008 member server with SQL, IIS, and file shares, for a medium business of 50-100 desktops. The reason I am leaning toward Server rather than ESXi is backup. Using ESXi, if I want to back up the VMs, I would need a second server in the office with enough storage available to hold a copy of the VMDKs. I am wondering if putting this virtual environment on top of a basic 2008 Server install will allow for easier backups, both to tape and to offsite storage using JungleDisk. Can a snapshot be triggered easily via a scheduled job? I know this doesn't necessarily handle file-level restores, but I want to make sure that in a DR situation we can restore production servers quickly. Does this concept hold water? Would a very minimal install of the 2008 host remove too many resources from the actual production machines? This would be a new Dell 410 server with 12 GB RAM and six 600 GB 15K drives in a RAID 6, with dual Intel Xeon 2.26 GHz processors.

    Read the article
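
    On the "can a snapshot be triggered via a scheduled job" point: VMware Server 2.0 can be scripted with the VIX vmrun tool from cron or Task Scheduler. This is only a sketch; the host URL, credentials, datastore path, snapshot name, and the single-snapshot-per-VM limitation should all be verified against your install:

        #!/bin/sh
        # Nightly snapshot of a guest via vmrun; everything below is a placeholder.
        HOSTURL="https://localhost:8333/sdk"
        VMX="[standard] fileserver/fileserver.vmx"

        # VMware Server keeps a single snapshot per VM, so remove the old one
        # first (ignore the error if none exists yet), then take a fresh one.
        vmrun -T server -h "$HOSTURL" -u admin -p secret \
            deleteSnapshot "$VMX" nightly 2>/dev/null
        vmrun -T server -h "$HOSTURL" -u admin -p secret \
            snapshot "$VMX" nightly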

  • NetBackup prefers "Scratch" tapes over dedicated tapes

    - by wfaulk
    I have a NetBackup 6.0MP7 installation running on Windows Server 2003. It functions as the only Master Server and Media Server. I swap a full set of tapes in and out every week, but leave a set of tapes with their Volume Pool set to "Scratch" in all the time. The weekly tape sets then get rotated back in after a period of time. Largely, this works fine. I seldom actually need the scratch tapes, but every once in a while, a backup will run over what I have dedicated to the task. However, one week's set of tapes consistently gets declined in favor of the scratch pool. The backup policies are the same for every week, they all have "Policy Volume Pool" set to "NetBackup", and all of the tapes for every week (beside the scratch tapes) have had their pools assigned as "NetBackup", definitely including the week that always gets ignored. That said, it doesn't ignore all of the NetBackup pool tapes for that week. It does usually write to two or three of them, but it writes to like 20 of the scratch tapes. (I haven't thought to look to see if it's always the same two or three tapes.) And this problem never seems to occur for any other week. It doesn't load the tapes and then reject them; it never seems to try to use them at all. They are not flagged as frozen. They are all active and unassigned when I swap them in. The tapes are in a Quantum PX510 tape library. The NetBackup server is attached to the library/robot via fibrechannel going through an HP-branded Brocade switch. I'm not an expert on NetBackup at all. I don't really even know where to look. Any advice on logs to look at or logging to enable or really anything at all would be appreciated. I'll keep an eye on the question and update it if anyone needs any more info to help.

    Read the article

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the Application event log of one of our database servers (Windows 2003 and SQL Server 2005). The nightly full database backups complete successfully; however, immediately after the job success is written to the event log, there is a run of entries that say:

        SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288)Attempt to release mutex not owned by caller.

    There are five of these logged. The server itself has more than 20 databases on it, which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a reboot on Friday to install some patches, which included KB960089.

    Edit: After getting the errors for a few days, they've now stopped without any action on my part other than letting the backups continue as they were. It may be a coincidence, but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

    Read the article

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterated through directories on the source disk to determine which files to copy, then ran them individually through gpg encryption. Is there any way to do this using only scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job looked something like this:

        # Change to source dir
        cd /mnt/disk1

        # Create variable to store
        # directories named by date YYYYMMDD
        j="20000101/"

        # Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
          if [ "$j" \< "$i" ]; then
            j=${i%/*}
          fi
        done

        # Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.

    Read the article
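
    If the restricted account on the remote side allows sftp as well as scp (worth checking, since both usually ride on the same SSH connection), the "find the newest YYYYMMDD directory" step can be done remotely with an sftp batch listing, and the rest stays close to the old script. A sketch only, with the host and paths as placeholders:

        #!/bin/sh
        SRC="[email protected]"
        SRCBASE="/mnt/disk1"
        DEST="/mnt/disk2/btarget"

        # List the remote base dir over sftp (batch mode, key auth) and keep
        # only names that look like YYYYMMDD, newest last.
        j=$(echo "ls $SRCBASE" | sftp -b - "$SRC" \
              | grep -Eo '[0-9]{8}' | sort | tail -1)
        [ -n "$j" ] || { echo "no dated directory found" >&2; exit 1; }

        # Pull that day's bsource files, then encrypt them locally as before.
        mkdir -p "$DEST/incoming"
        scp -p -B "$SRC:$SRCBASE/$j/bsource/*" "$DEST/incoming/"

        for k in "$DEST"/incoming/*; do
            sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty \
                -o "$DEST/$(basename "$k").gpg" "$k"
        done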

  • duplicity fails: not prompting for password: "Running 'sftp user@host' failed"

    - by Thr4wn
    I have two Linode VPS accounts and I want to back up one onto the other (the reasons are mainly for fun and to practice server administration).

    The short version: duplicity isn't even asking for my password; it immediately says "Invalid SSH password" (yet I can ssh into the other server). Why?

    The long version: when I run

        duplicity /home/me scp://[email protected]//root/backup

    I get

        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #2)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #3)

    and it says Invalid SSH password immediately, with no opportunity for me to actually type the password. When I run

        duplicity full -v9 --num-retries 4 /home/me scp://[email protected]//root/backup

    I get

        Main action: full
        Running 'sftp [email protected]' (attempt #1)
        State = sftp, Before = 'Connecting to 97.107.129.67... [email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)

    I can ssh into [email protected] fine, and in fact had the IP in known_hosts before I tried any of this. Server 1 (from which I'm running the duplicity command) is Linode's default Ubuntu 8 setup with only a handful of programs installed via apt-get. Server 2 (represented by x.x.x.x) is literally only Linode's default Ubuntu 8 setup. I previously tried using SystemImager -- would that have changed settings in a destructive way? (I have removed it and rebooted since then.) Isn't duplicity supposed to prompt for the password? Am I using it wrong? Are there common mistakes/dependencies I need to know about? Is there any way x.x.x.x could be set up that would make this not work, given that I used Linode's default Ubuntu 8 setup and have barely changed it?

    Read the article
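
    Duplicity's scp/sftp backend is non-interactive, so "Invalid SSH password" often just means the underlying sftp call could not authenticate without a prompt. One way to test is to force batch-mode key authentication by hand and then pass the same identity file through to duplicity; the option name belongs to the pexpect ssh backend and should be verified against your duplicity version, and the key path is an assumption:

        #!/bin/sh
        # 1. Prove that key-based, non-interactive sftp works at all:
        #    BatchMode=yes makes ssh fail instead of falling back to a prompt.
        sftp -oBatchMode=yes -oIdentityFile=$HOME/.ssh/id_rsa [email protected] <<'EOF'
        ls /root/backup
        quit
        EOF

        # 2. If that works, hand the same options to duplicity's sftp invocation.
        duplicity --ssh-options="-oIdentityFile=$HOME/.ssh/id_rsa -oBatchMode=yes" \
            /home/me scp://[email protected]//root/backup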

  • Can't log in after restoring from Time Machine

    - by Jay Conrod
    My friend uses a MacBook Pro with Snow Leopard 10.6.2. She uses both FileVault and Time Machine to preserve her data. Recently, she suffered a hard disk failure. After restoring from Time Machine using the Snow Leopard install disk, she gets the following error when logging in:

        You are unable to log in to the FileVault user account at this time. Logging into the account failed because an error occurred.

    When examining the file system through Terminal, I noticed her home directory is not present: there is no /Users/username directory, nor the FileVault .sparsebundle file that's supposed to be there. When using Time Machine.app on /Users, it appears as if her home directory was never there. Additionally, I did a search on the backup disk with the following command:

        sudo find /Volumes/backup -name '*.sparsebundle'

    No results. She told me that after working with some large data files, Time Machine would come on, and it would sound like it was transferring a lot of data to the hard disk. Time Machine must have been doing something, right? How can we recover her files? Are they still there?

    Read the article

  • Duplicity on a ReadyNAS

    - by Jason Swett
    Has anyone here run Duplicity on a ReadyNAS? I'm trying, but here's what I get:

        duplicity full --encrypt-key="ABC123" /home/jason/ scp://[email protected]//gob
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    I've also found this post that says the "Invalid SSH password" message doesn't actually mean invalid SSH password. This would make sense because I'm not using an SSH password; I'm using a public key. I can ssh, ftp, sftp and rsync into my ReadyNAS just fine. (Actually, to be more accurate, I can get past authentication with ssh, ftp and sftp but I can't actually do anything past that. Regardless, that's enough to tell me that "Invalid SSH password" is bogus. Rsync works with no problems.) The post I found says the command will work as soon as the directory at the end of your scp command exists, but I don't know how to check for that. I know the share gob exists on my ReadyNAS and I know it's writable because I'm writing to it with rsync. Also, here is the verbose output:

        Using archive dir: /home/jason/.cache/duplicity/3bdd353b29468311ffa8485160da6873
        Using backup name: 3bdd353b29468311ffa8485160da6873
        Import of duplicity.backends.rsyncbackend Succeeded
        Import of duplicity.backends.sshbackend Succeeded
        Import of duplicity.backends.localbackend Succeeded
        Import of duplicity.backends.botobackend Succeeded
        Import of duplicity.backends.cloudfilesbackend Succeeded
        Import of duplicity.backends.giobackend Succeeded
        Import of duplicity.backends.hsibackend Succeeded
        Import of duplicity.backends.imapbackend Succeeded
        Import of duplicity.backends.ftpbackend Succeeded
        Import of duplicity.backends.webdavbackend Succeeded
        Import of duplicity.backends.tahoebackend Succeeded
        Main action: full
        ================================================================================
        duplicity 0.6.10 (September 19, 2010)
        Args: /usr/bin/duplicity full --encrypt-key=ABC123 -v9 /home/jason/ scp://[email protected]//gob
        Linux gob 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686
        /usr/bin/python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5]
        ================================================================================
        Using temporary directory /tmp/duplicity-cridGi-tempdir
        Registering (mkstemp) temporary file /tmp/duplicity-cridGi-tempdir/mkstemp-ztuF5P-1
        Temp has 86334349312 available, backup will use approx 34078720.
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' (attempt #1)
        State = sftp, Before = '[email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    Any ideas as to what's going wrong?

    Read the article
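
    Following the same lead as the linked post (the destination directory may simply not exist at the path duplicity uses), one check is to run the sftp command from the log by hand and look for /gob where the sftp session actually lands. The host and share name come from the question; everything else, including the alternate ReadyNAS path, is an assumption to verify:

        #!/bin/sh
        # Re-run the sftp invocation from duplicity's own log, in batch mode so
        # an auth problem fails loudly instead of looking like a bad password.
        sftp -oBatchMode=yes -oServerAliveInterval=15 -oServerAliveCountMax=2 \
            [email protected] <<'EOF'
        pwd
        ls -l /
        ls -l /gob
        quit
        EOF

        # If /gob is not visible at that path (ReadyNAS shares often live under
        # a different root such as /c), point duplicity at the path sftp
        # actually shows, e.g.:
        #   duplicity full /home/jason/ scp://[email protected]//c/gob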

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >