Search Results

Search found 6101 results on 245 pages for 'incremental backup'.

Page 96 of 245

  • Designing & Maintaining SQL Server Transactional Replication Environments

    Microsoft IT protects against unplanned Transactional Replication outages and issues by using best practices and proactive monitoring. This results in increased stability, simplified management and improved performance of transactional replication environments.

    Read the article

  • Git, auto updating, security and tampering?

    - by acidzombie24
    I was thinking about hosting my private project on my server (I may use gitolite) and keeping a copy on my local machine as backup (git clone, then an automated git fetch every few minutes). I want to know what happens if there is a bug in gitolite or somewhere else on my server and the source code and git repository have been tampered with. Will my backup also be corrupted? Will I easily be able to revert the source using the history?
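
    For reference, a minimal sketch of the automated backup loop being described, with a hypothetical host and paths; this is one way to set it up, not anything prescribed in the thread. git fsck verifies object integrity, which catches silent corruption, but a forced rewrite of history fetches cleanly, so keeping snapshots of the refs is what lets you spot tampering:

        # One-time setup: a bare mirror clone of the repository (host/path hypothetical).
        git clone --mirror ssh://git@myserver/myproject.git /backup/myproject.git

        # Periodic update, e.g. driven by cron every few minutes.
        cd /backup/myproject.git && git fetch --all --prune

        # Verify object hashes; corruption on the server would surface here.
        git fsck --full

        # Snapshot the refs so a forced history rewrite can be detected by diffing.
        git for-each-ref > /backup/myproject.refs.$(date +%F-%H%M)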

    Read the article

  • Steps to Rename a Subscriber Database for SQL Server Transactional Replication

    I have transactional replication configured in production. The business team has a requirement to rename the subscription database. Is it possible to rename the subscription database and ensure that transactional replication will continue to function as before? If so, how could we achieve this?
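
    As a rough illustration of what the answer typically involves (not necessarily the article's exact steps, and every server, publication, and database name here is hypothetical), the subscription is dropped, the database renamed, and the subscription re-created:

        REM At the publisher: drop the existing subscription.
        sqlcmd -S PUBSRV -d PubDb -Q "EXEC sp_dropsubscription @publication = N'MyPub', @article = N'all', @subscriber = N'SUBSRV', @destination_db = N'OldSubDb';"

        REM At the subscriber: rename the database.
        sqlcmd -S SUBSRV -d master -Q "ALTER DATABASE OldSubDb MODIFY NAME = NewSubDb;"

        REM At the publisher: re-create the subscription against the new name.
        sqlcmd -S PUBSRV -d PubDb -Q "EXEC sp_addsubscription @publication = N'MyPub', @subscriber = N'SUBSRV', @destination_db = N'NewSubDb', @subscription_type = N'push';"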

    Read the article

  • Renaming a Published SQL Server Database

    I have transactional replication configured in production. I am wondering if we could rename the publication database in transactional replication without having to drop and recreate the replication set-up. Also, is it possible to rename the database files of the publication database without affecting the replication configuration?
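
    For the syntax side of the question only (whether replication tolerates these renames is precisely what the article addresses), a sketch with hypothetical names; the logical file names can be changed in place, as can the database name:

        REM Rename a logical data file of the publication database.
        sqlcmd -S PUBSRV -d master -Q "ALTER DATABASE PubDb MODIFY FILE (NAME = N'PubDb_Data', NEWNAME = N'NewPubDb_Data');"

        REM Rename the database itself.
        sqlcmd -S PUBSRV -d master -Q "ALTER DATABASE PubDb MODIFY NAME = NewPubDb;"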

    Read the article

  • Bin Packing Problems: The SQL

    The 'bin packing' problem isn't just a fascination for computer scientists, but comes up in a whole range of real-world applications. It isn't that easy to come up with a practical, set oriented solution in SQL that gives a near-optimal result.

    Read the article

  • SQL Server AlwaysOn - Part 2 - Availability Groups Setup

    SQL Server has introduced some excellent high-availability options, but I was looking for an option that would allow me to access my secondary database without it being read-only or in restoring mode. I need the ability to see transactions occur and to query the secondary database.
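
    The mechanism the article builds on can be sketched as follows, with hypothetical availability-group and server names; note that a readable secondary is still read-only for writes, which is part of the trade-off discussed:

        REM Allow read access to a secondary replica of an availability group.
        sqlcmd -S PRIMARYSRV -Q "ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SECONDARYSRV' WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));"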

    Read the article

  • Partitioned Tables, Indexes and Execution Plans: a Cautionary Tale

    Table partitioning is a blessing in that it makes large tables that have varying access patterns more scalable and manageable, but it is a mixed blessing. It is important to understand the down-side before using table partitioning.

    Read the article

  • SQL Azure - Creating backups and copies of your databases

    As a DBA, you always followed the practice of backing up your database (or taking a snapshot of it) before making any changes, so that you can revert to your old database state if something goes wrong. Also, to set up a development or test environment, you use a backup of your database and restore it in the respective environment. If you are moving to SQL Azure, what would you do in these cases, given that backup/restore and database snapshots are not supported as of now?
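
    One answer available at the time of writing is the database-copy statement, which creates a transactionally consistent copy on the same or another SQL Azure server; a sketch with hypothetical server and database names and a placeholder password:

        REM Create a copy of MyDb to act as a snapshot/backup.
        sqlcmd -S myserver.database.windows.net -U admin@myserver -P <password> -d master -Q "CREATE DATABASE MyDb_copy AS COPY OF MyDb;"

        REM The copy is asynchronous; watch progress in sys.dm_database_copies.
        sqlcmd -S myserver.database.windows.net -U admin@myserver -P <password> -d master -Q "SELECT * FROM sys.dm_database_copies;"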

    Read the article

  • The PoSh DBA: Solutions using PowerShell and SQL Server

    PowerShell is worth using when it is the quickest way to provide a solution. For the DBA, it is much more than getting information from SQL Server instances via PowerShell; it can also be run from SQL Server as part of a system that helps with administrative and monitoring tasks.

    Read the article

  • Simplified Restores with SQL Server 2012 Recovery Advisor

    Occasionally, a DBA may need to restore a database from multiple backup files that originated from multiple servers. This requirement might arise, for example, in a database-mirroring configuration, where backups may be from either of the servers.
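
    The underlying restore sequence that the Recovery Advisor assembles for you looks like this; a sketch with hypothetical file and database names, restoring everything WITH NORECOVERY until the final log:

        sqlcmd -S SRV -Q "RESTORE DATABASE [MyDb] FROM DISK = N'C:\bak\full_serverA.bak' WITH NORECOVERY, REPLACE;"
        sqlcmd -S SRV -Q "RESTORE LOG [MyDb] FROM DISK = N'C:\bak\log1_serverB.trn' WITH NORECOVERY;"
        sqlcmd -S SRV -Q "RESTORE LOG [MyDb] FROM DISK = N'C:\bak\log2_serverA.trn' WITH RECOVERY;"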

    Read the article

  • Ubuntu boot hangs after message "Running /scripts/init-bottom ... done"

    - by Douglas B. Staple
    I've been trying to copy a Proxmox container based on the Ubuntu Precise Standard template to a VirtualBox VM. I am now stuck at a point where my new Ubuntu/VirtualBox VM hangs after the message "Running /scripts/init-bottom ... done" during boot. I started by installing Ubuntu Server 12.04.4 LTS on a VirtualBox VM; it was the closest "official" Ubuntu ISO to the Proxmox container OS I could find. I installed all updates on both the Proxmox container and on the VirtualBox VM. The idea was to get the same kernel version running on the Proxmox container and the VirtualBox VM:

        sudo apt-get update ; sudo apt-get upgrade ; sudo apt-get dist-upgrade
        sudo reboot

    I then rsync'd the entire Proxmox container to a temporary directory in the VirtualBox VM:

        cd /
        mkdir /tmp/backup
        rsync -e ssh -av --exclude={/dev,/proc,/sys,/tmp,/run,/mnt,/media,/lost+found,/boot,/selinux} root@my_proxmox_container_hostname:/ /tmp/backup

    Next I shut down the virtual machine and booted the VM with a bootable Linux image (I used the desktop image of Ubuntu 12.04 LTS, ubuntu-12.04.4-desktop-i386.iso), dropped to a root prompt, and mounted the VM root filesystem:

        sudo mount /dev/sda1 /mnt

    I removed most of the files under /mnt:

        cd /mnt
        sudo rm -rf bin etc home lib opt sbin root usr var

    moved all of the files from /mnt/tmp/backup into /mnt:

        sudo mv /mnt/tmp/backup/* /mnt

    and rebooted the system. For me, at this point the system freezes after starting, after the message:

        Running /scripts/init-bottom ... done

    I've tried reinstalling GRUB and all manner of other things. I am almost ready to give up.

    Read the article

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error being reported is:

        DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB)

    This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it, and received the following error:

        The update is not applicable to your computer.

    I've verified that I have indeed downloaded the 64-bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?
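
    A small diagnostic that may help here, purely as a suggestion (the post elides the KB number of the hotfix), is to list the installed updates on the protected server and check whether the fix, or SP1's equivalent, is actually present:

        REM Run on the protected server; filter for the KB the article names.
        wmic qfe get HotFixID,Description,InstalledOn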

    Read the article

  • How to repair a damaged transaction log file for Exchange 2003

    - by Markus Larsson
    Yesterday we had a power failure and the UPS did not work (it had worked perfectly before). Everything seemed to be OK when I started all the servers again, except for the mail: when I try to mount the store I get the following message: "The database files in this store are corrupted."

    Server: Exchange 2003 running on a Small Business Server
    Latest full backup: one week old
    Backup program: Backup Exec 9.0

    This is what I have done:

    1. Copied every file in the MDBDATA folder (edb, stm, log).
    2. Ran Eseutil /d for priv1.edb.
    3. Ran Eseutil /p for priv1.edb (took seven hours).
    4. Ran Isinteg -fix -test alltests; this is where it breaks down. Isinteg fails with the following error: "Isinteg cannot initiate verification process. Please review the log file for more information." The problem is that there is no log file created.
    5. Giving up on this route, I decided to do a restore from the backup. It fails with the error "Unable to read the header of logfile E00.log. Error -501", and the error "Information Store (5976) Callback function call ErrESECBRestoreComplete ended with error 0xC80001F5. The log file is damaged."

    My conclusion is that E00.log is damaged, so how can I repair it so that I can restore the database? Or should I give up and try some other route?
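
    Before choosing between repair and restore, one hedged first check (assuming the standard E00 log prefix mentioned in the post, run from the MDBDATA folder) is to let eseutil verify the logs and checksum the store directly:

        REM Verify the integrity of the E00* transaction logs.
        eseutil /ml E00

        REM Checksum the private store database.
        eseutil /k priv1.edb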

    Read the article

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling to delete a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites); what we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago. The purpose of this is so we can run our deployment procedure in a local environment before we do so on our live/public environment. We use this command:

        rm -r websites

    This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we are getting files that don't exist in the .tar backup; in fact these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory?

    - Our .tar backup does not include these files
    - We do not want to use the shred command
    - We do not want to use 3rd-party applications
    - The solution should be functional via terminal (SSH)

    Many thanks!

    EDIT: Er... we fixed it. It turns out the files that were reappearing came from a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)
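
    For anyone hitting the same wall, a quick way to spot the situation described in the edit, using the path from the post:

        # List any symbolic links inside the tree before deleting it.
        find /var/www/websites -type l -ls

    (tar, for its part, archives symlinks as links by default and only follows them when given -h/--dereference, which is why the linked files were never inside the .tar backup itself.)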

    Read the article

  • Correcting owner/permissions on damaged directory tree in linux

    - by mcs130
    I inadvertently made a backup copy of a directory recursively and forgot the -a (--preserve) switch when doing so. This damaged my backup directory (which contains data we need to access). The directory and all of its child folders and files comprise an installation of an application, including Postgres DB and Solr files. The original copy was used for a failed re-config attempt. Now I need to use the backup copy to start over, only the ownership of the backup copy is now root across everything, and it is no longer usable (processes won't run due to the ownership problems I created when I forgot the -a on the cp -r). I've re-installed a clean copy of the application into a third location now (which has the correct owner/perms) and need to copy the owner/perms from this good directory over onto the damaged directory. What is the best way (if even possible) to do this? (I've Googled and seen everything from Perl scripting to setfacl/getfacl suggested for this, but am unfortunately still confused.) Apologies if this seems a dumb question. Thanks.
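
    A sketch of the getfacl/setfacl route mentioned above, assuming the clean install lives in /opt/app-clean and the damaged backup in /opt/app-backup (both paths hypothetical). It relies on the two trees having identical relative paths, and must run as root so that owner and group are restored along with the permission bits:

        # Record owner, group and permissions for every file in the good tree.
        cd /opt/app-clean
        getfacl -R . > /tmp/app-perms.acl

        # Re-apply them onto the matching paths in the damaged tree.
        cd /opt/app-backup
        setfacl --restore=/tmp/app-perms.acl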

    Read the article

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and logshipping solution for achieving the following:

    - 15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time)
    - 5-minute Recovery Time Objective (must be able to get the db up and running again within 5 minutes)

    I am considering using logshipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration:

    - 40 Gbit/sec fiber channel between the primary and disaster recovery (DRC) sites
    - The sites are about 600 km apart.
    - At close of business, the amount of data generated is predicted to be about 150 MB/sec.
    - Log backup is planned for every 5 min.

    Doing some rough calculation I came up with the following numbers: 40 Gbit/sec = 5 MB/sec @ 100% network efficiency, and 5 MB/sec = 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. So if we cut it down to 3 minutes of logshipping time, which equals ~900 MB over 3 minutes at 100% network efficiency, that leaves about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used is capable of restoring 900 MB in 1 minute, but assume it can. For the close-of-business scenario: 150 MB/sec, over the 3-minute logshipping window, equals about 27 GB of data over 3 minutes... I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Gbit/sec line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...

    Read the article

  • Robocopy failure with Windows Server 2008 Scheduled Task

    - by CC
    So I have a batch script for robocopy. Running this from the command line does exactly what I want:

        robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copyall /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np

    Then I create a Scheduled Task in Windows Server 2008. If I set up the task to use my Domain Admin account, great. But I'm trying to get it to run as a separate domain account for Scheduled Tasks. If I use that account, folders get created, but files aren't copied. I get the following error:

        2011/02/17 15:41:48 ERROR 1307 (0x0000051B) Copying NTFS Security to Destination Directory D:\SQL Backup\folder\
        This security ID may not be assigned as the owner of this object.

    I've verified my domain\Scheduled Tasks account has Full Control NTFS permissions on both the source and destination, and Full Control sharing on my hidden \\server1\Backup$ share. Just for giggles, I've tried adding the domain account to the local Administrators group on both servers. This works fine, but that seems like a lot of privileges just to copy files. Any ideas on what I'm missing?
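
    One commonly suggested workaround, offered here as a sketch rather than a confirmed fix: /COPYALL is shorthand for /COPY:DATSOU, and it is the O (owner) and U (auditing) parts that need a privilege the task account lacks. Keeping the ACLs but dropping those two pieces avoids error 1307:

        robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copy:DATS /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np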

    Read the article

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: We're running RAID5 + hotspare (8x500 GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x2TB + 1x800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: We have an SBS server as well as a minor (2x50 GB, but growing at 10 GB/month) database server. The application that lives on the database VM is CPU- and I/O-intensive; it's a database-churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on).

    Performance issue: When I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions:

    - What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD?
    - Are there sweet spots/ugly spots in the storage setup?
    - Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea?
    - Is there any way to "share" some image in a copy-on-write fashion? Most of the "backup-copy-restore" is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x50 GB) would only need to be done once per week instead of once per dev per week. (Runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box.)

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and over the course of the backup job slowly drop down so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, and I sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the mentioned working-set job has been running for the last 3 1/2 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • "Safely remove hardware"...doesn't.

    - by Kev
    I have an external USB hard disk that I have scripted to safely shut down after a backup, so the backup operator can unplug it, and knows not to if the lights are still on for some reason. It has always worked fine using the DevEject command-line utility. This week it failed for some reason:

        DevEject 1.0 2003 c't/Matthias Withopf
        Ejecting 'USB Mass Storage Device' [USB\VID_0411&PID_002A\00000704C8D2]...FAILED (23,5)
        Error ejecting device USB Mass Storage Device, vetoed (15,5)!

    Worse yet, using the Safely Remove Hardware tray icon, I click Stop, click OK, it pauses about 5 seconds with OK and Cancel greyed out, closes the sub-window, and then the main window with the Stop button still shows the device, and Stop is still available. I can keep doing that and it never gets rid of the device. I can still access it in Explorer. LockHunter reports that nothing is locking the drive. I've made no changes to the backup configuration or anything to do with the drive this week. Why the sudden flake-out? Short of a restart, which I can't do today before the backup operator goes home, how do I fix it?

    Read the article

  • tar Cannot stat: No such file or directory

    - by VVP
    I got this error during my mail server backup:

        2010-09-16 06:24:20 ERROR backup of /var/mail/vhosts failed: tar: Removing leading `/' from member names
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284588471.Vfd00I16e0223M187263.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284587441.Vfd00I16e0220M85965.server.host-name\:2,: Cannot stat: No such file or directory
        [six more "Cannot stat: No such file or directory" lines for other messages in the same .maildir/cur directory]
        tar: Error exit delayed from previous errors

    Did this happen because a user deleted his messages? Is there any way to prevent it? I assume this can happen not only with e-mail backups. Can I rely on tar & gzip as a mail backup system?
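
    Two common mitigations for files vanishing between tar's directory scan and its read phase (typical of a live maildir, where clients delete and rename messages constantly); both are hedged suggestions with hypothetical backup paths, not anything from the post:

        # 1. Tell GNU tar to keep going instead of treating the missing file as fatal.
        tar --ignore-failed-read -czf /backup/vhosts.tar.gz /var/mail/vhosts

        # 2. Or rsync to a quiescent staging copy first, then tar the copy.
        rsync -a --delete /var/mail/vhosts/ /backup/staging/vhosts/
        tar -czf /backup/vhosts.tar.gz -C /backup/staging vhosts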

    Read the article

  • How to save contacts from iPhone to computer?

    - by goodm
    Step 1: Download the Tansee iPhone Transfer Contact free trial version at http://www.softseeking.com/prodail.aspx?proid=47, then install the software (skip if already done).

    Step 2: Connect the iPhone to your computer.

    Step 3: Run Tansee iPhone Transfer Contact; the contacts in your iPhone memory will be displayed automatically, as shown on your iPhone screen (fig 1). Click on a single name and all of that person's information will be displayed (fig 2).

    Step 4-a: In the fig 1 situation, you can click the "Copy" button to copy all contacts from your iPhone to your computer, then select options: 1. Choose File Type: back up to a TXT, ANTC or CSV file. 2. Choose File Path: you can change the backup path if you do not want the default. 3. Advanced Option: if you chose the ANTC format, you can add a password to protect the file. Note: We do not know the password, so please do remember it. Click the OK button to finish the copy (fig 3). Note: You can only copy the first 5 contacts with the trial version.

    Step 4-b: In the fig 2 situation, click the "Copy Contact From" button to copy the contacts of a single person, and select options: 1. Choose File Type: back up contacts to a TXT or CSV file in single-contact transfer. 2. Choose File Path: you can change the backup path if you do not want the default. 3. Advanced Option: disabled in single-contact transfer. Click the OK button to finish the copy (fig 4). Note: You can only copy the first 5 contacts in the trial version.

    Read the article

  • Tools to manage sql 2008 database mirroring?

    - by lemkepf
    We are going to be moving about 20 databases that live on a single instance of SQL 2000 to a SQL 2008 R2 environment with database mirroring. What I'm looking for is a tool or scripts that will help me manage the conversion and management of those 20 DBs in this new mirrored environment easily. There are many steps in setting each DB up and I want to automate as much as possible.

    Edit: Here are the steps I've been doing manually:

    1. Create the same usernames/passwords from the old SQL 2000 server on the new SQL 2008 server. Then sync those users/passwords onto the other SQL 2008 server with the same SIDs, so that when we do the db backup and restore they match up.
    2. Take a backup of each SQL 2000 db.
    3. Copy them to server A.
    4. Restore the backup on server A.
    5. Backup from server A, copy to server B, restore there.
    6. Run the mirroring "configure security" wizard.
    7. Start mirroring.

    I'd love to be able to script this out or have a tool that does it for me. Thanks! Paul
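
    Steps 2 through 7 reduce to a repeatable per-database script, which is what makes the 20-database case automatable; a sketch with hypothetical server names and share paths, assuming the mirroring endpoints already exist (the "configure security" wizard normally creates them):

        sqlcmd -S OLD2000 -Q "BACKUP DATABASE [Db1] TO DISK = N'\\share\mig\Db1.bak' WITH INIT;"
        sqlcmd -S SRVA -Q "RESTORE DATABASE [Db1] FROM DISK = N'\\share\mig\Db1.bak';"
        sqlcmd -S SRVA -Q "BACKUP DATABASE [Db1] TO DISK = N'\\share\mig\Db1_A.bak' WITH INIT;"
        sqlcmd -S SRVB -Q "RESTORE DATABASE [Db1] FROM DISK = N'\\share\mig\Db1_A.bak' WITH NORECOVERY;"
        sqlcmd -S SRVA -Q "BACKUP LOG [Db1] TO DISK = N'\\share\mig\Db1.trn' WITH INIT;"
        sqlcmd -S SRVB -Q "RESTORE LOG [Db1] FROM DISK = N'\\share\mig\Db1.trn' WITH NORECOVERY;"
        REM Mirroring partners (replaces the wizard's final step); mirror side first.
        sqlcmd -S SRVB -Q "ALTER DATABASE [Db1] SET PARTNER = N'TCP://srva.domain.local:5022';"
        sqlcmd -S SRVA -Q "ALTER DATABASE [Db1] SET PARTNER = N'TCP://srvb.domain.local:5022';"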

    Read the article

  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my Domain Controllers and the Time Service. On my backup DC, I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a time server? My PDC should be the time server and I have gone through setting up the PDC as the time server. I was not around for the original setup of the time server with the old PDC and backup DC, but I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old PDC. Is it possible that the backup DC was set up as the time server and it still thinks it's supposed to be giving out time to everyone?

    The Type value in the registry on the PDC is NTP; on the backup DC it is NT5DS.

    Results of w32tm /monitor:

        Getting AD DC list for default domain...
        Analyzing:
        DC2.local..com [192.168.1.8:123]:
            ICMP: 1ms delay
            NTP: -0.6349491s offset from DC1.local..com
            RefID: DC1.local..com [192.168.1.9]
            Stratum: 4
        DC1.local..com *** PDC *** [192.168.1.9:123]:
            ICMP: 0ms delay
            NTP: +0.0000000s offset from DC1.local..com
            RefID: wwwco1test12.microsoft.com [65.55.21.20]
            Stratum: 3
        Warning: Reverse name resolution is best effort. It may not be correct since RefID field in time packets differs across NTP implementations and may not be using IP addresses.
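
    The usual repair for this symptom, offered as a hedged suggestion rather than a diagnosis: re-point the PDC emulator at an external source and mark it reliable, then put the backup DC explicitly back on the domain hierarchy so it stops trying to act as a source itself (the peer list here is illustrative):

        REM On the PDC emulator:
        w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:yes /update

        REM On the backup DC:
        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time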

    Read the article
