Search Results

Search found 6101 results on 245 pages for 'incremental backup'.

  • How to ensure dbs are all in sync when restored?

    - by blade
    Hi, in large server environments, how do you handle backing up SQL Server databases that depend on other databases? If I back up DB1 from a server but not another database it relies on, restoring the two in differing states could cause problems. It seems like all dependent DBs should be backed up regardless of size, but in my current job (we're a datacentre company and I'm a .NET developer), I only back up some of the several dependent DBs on a SQL Server instance. Thanks
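
    One SQL Server feature that addresses this directly is marked transactions: a transaction that touches every dependent database leaves a named mark in each database's log, and the logs can later be restored to that common mark. A minimal T-SQL sketch, assuming both databases use the full recovery model; the database names, the dbo.SyncAnchor table, and the backup paths are hypothetical:

        -- Leave a shared recovery mark in the logs of both databases.
        -- DB1, DB2 and dbo.SyncAnchor are hypothetical placeholders.
        BEGIN TRANSACTION BackupSync WITH MARK 'Consistent point across DB1 and DB2';
            UPDATE DB1.dbo.SyncAnchor SET LastMark = GETDATE();
            UPDATE DB2.dbo.SyncAnchor SET LastMark = GETDATE();
        COMMIT TRANSACTION BackupSync;

        -- At restore time, bring each database back to the same mark:
        RESTORE DATABASE DB1 FROM DISK = N'D:\Backups\DB1_full.bak' WITH NORECOVERY;
        RESTORE LOG DB1 FROM DISK = N'D:\Backups\DB1_log.trn'
            WITH STOPATMARK = 'BackupSync', RECOVERY;
        -- ...and repeat for DB2.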

    Read the article

  • SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28

    - by pinaldave
    Backup is the most important task for any database admin. Your data is at risk if you are not performing database backups. Honestly, I have seen many DBAs who know how to take backups but do not know how to restore them. (Sigh!) In this blog post we are going to discuss one of my real experiences with one of my clients – BACKUPIO. When I started to deal with it, I really had no idea how to fix the issue. However, after fixing it at two places, I think I know why this happens, but at the same time, I am not sure the fix is the best solution. The reality is that the fix is not a solution but a workaround (which is not optimal, but gets your things done).

    From Book On-Line: BACKUPIO: Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount. BACKUPBUFFER: Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount.

    BACKUPIO and BACKUPBUFFER Explanation: These wait stats occur when you are taking a backup to tape or to some other extremely slow backup system.

    Reducing BACKUPIO and BACKUPBUFFER waits: In my recent consultancy, backup to tape was very slow, probably because the tape system was very old. When I explained the reason for this wait type during the consultancy, the owners immediately decided to replace the tape drive with an alternate system. They had a small SAN enclosure sitting unused on the side, which they decided to repurpose. After a week, I received an email from their DBA saying that the wait stats had dropped drastically.

    At another location, my client was using a third-party tool (please don't ask me the name of the tool) to take backups. This tool compressed the backup while taking it. I have had a very good experience with this tool almost all the time, except in this one rare case. When I took a backup using the native SQL Server compressed backup, there was a very small value on this wait type and the backup was much faster. However, when I attempted the backup with the third-party tool, this value was very high again and the backup took much more time. The third-party tool had many other features, but the client was not using them. We ended up using the native SQL Server compressed backup and it worked very well. If I see this wait type running high in a future consultancy, I will try to understand it in much more detail, and then I can probably come to a more solid solution.

    Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
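
    As a generic illustration of the above (not from either engagement described in the post), this is how you can check the backup waits an instance has accumulated, and what a native compressed backup looks like; the database name and path are illustrative:

        -- Cumulative backup-related waits since the last service restart.
        SELECT wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER');

        -- Native compressed backup (SQL Server 2008 and later).
        BACKUP DATABASE MyDatabase
        TO DISK = N'D:\Backup\MyDatabase.bak'
        WITH COMPRESSION, STATS = 10;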

    Read the article

  • Iterative and Incremental Principle Series 4: Iteration Planning – (a.k.a What should I do today?)

    - by llowitz
    Welcome back to the fourth in a five-part series on applying the Iterative and Incremental principle. During the last segment, we discussed how the Implementation Plan includes the number of iterations for a project, but not the specifics of what will occur during each iteration. Today, we will explore Iteration Planning and discuss how and when to plan your iterations.

    As mentioned yesterday, OUM prescribes initially planning your project approach at a high level by creating an Implementation Plan. As the project moves through the lifecycle, the plan is progressively refined. Specifically, the details of each iteration are planned prior to the iteration start.

    The Iteration Plan starts by identifying the iteration goal. An example of an iteration goal during the OUM Elaboration Phase may be to complete the RD.140.2 Create Requirements Specification for a specific set of requirements. Another project may determine that its iteration goal is to focus on a smaller set of requirements, but to complete both the RD.140.2 Create Requirements Specification and the AN.100.1 Prepare Analysis Specification. In an OUM project, the Iteration Plan needs to identify both the iteration goal (how far along the implementation lifecycle you plan to be) and the scope of work for the iteration. Since each iteration typically ranges from 2 to 6 weeks, it is important to identify a scope of work that is achievable, yet challenging, given the iteration goal and timeframe.

    OUM provides specific guidelines and techniques to help prioritize the scope of work based on criteria such as risk, complexity, customer priority and dependency. In OUM, this prioritization helps focus early iterations on the high-risk, architecturally significant items, helping to mitigate overall project risk. Central to the prioritization is the MoSCoW (Must Have, Should Have, Could Have, and Won't Have) list. The result of the MoSCoW prioritization is an Iteration Group: a scope of work to be worked on as a group during one or more iterations.

    As I mentioned during yesterday's blog, it is pointless to plan my daily exercise in advance since several factors, including the weather, influence what exercise I perform each day. Therefore, every morning I perform Iteration Planning. My "Iteration Plan" includes the type of exercise for the day (run, bike, elliptical), whether I will exercise outside or at the gym, and how many interval sets I plan to complete. I use several factors to prioritize the type of exercise that I perform each day. Since running outside is my highest priority, I try to complete it early in the week to minimize the risk of not meeting my overall goal of doing it twice each week. Regardless of the specific exercise I select, I follow the guidelines in my Implementation Plan by applying the 6-minute interval sets. Just as in OUM, the iteration goal should be in the context of the overall Implementation Plan, and the iteration goal should move the project closer to achieving the phase milestone goals.

    Having an Implementation Plan details the strategy of what I plan to do and keeps me on track, while the Iteration Plan affords me the flexibility to juggle what I do each day based on external influences, thus maximizing my overall success. Tomorrow I'll conclude the series on applying the Iterative and Incremental approach by discussing how to manage the iteration duration and highlighting some benefits of applying this principle.

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take Snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following: mount the latest __RECENT snapshot LUN to the host using a specific drive letter, perform a TSM-based incremental backup, and dismount the LUN. This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
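
    On the return-code question, a batch file can capture dsmc's exit code in %ERRORLEVEL% and hand it back to the caller with exit /b, so the TSM scheduler sees the real result. A rough sketch, with the SnapDrive mount/dismount calls left as placeholders for whatever your existing script runs:

        @echo off
        rem Mount the latest __RECENT snapshot LUN (replace with your SnapDrive call).
        call :mount_lun F:
        if errorlevel 1 exit /b 1

        rem TSM incremental backup of the mounted snapshot.
        rem dsmc exit codes: 0=success, 4=files skipped, 8=warnings, 12+=failure.
        dsmc incremental F:\ -subdir=yes
        set RC=%ERRORLEVEL%

        call :dismount_lun F:

        rem Propagate the dsmc result to the TSM scheduling service.
        exit /b %RC%

        :mount_lun
        rem Placeholder: your sdcli/SnapDrive mount command goes here.
        exit /b 0

        :dismount_lun
        rem Placeholder: your sdcli/SnapDrive dismount command goes here.
        exit /b 0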

    Read the article

  • How do I copy files between hard drives on the Ubuntu CLI?

    - by ed209
    I have a dedicated server with a 120 GB main SSD. The server happens to come with a couple of 3000 GB hard drives, and I'd like to use them to back up my main drive: preferably one as an exact copy of the main SSD, and the other with incremental backups of the MySQL database and a user-uploads directory. These are the drives I have:

        Disk /dev/sda: 120.0 GB, 120034123776 bytes
        255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f2e18

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048     4196352     2097152+  83  Linux
        /dev/sda2         4198400     5246976      524288+  83  Linux
        /dev/sda3         5249024   234441647   114596312   83  Linux

        Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdb doesn't contain a valid partition table

        Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdc doesn't contain a valid partition table

    The first problem I have is that I have no idea how to copy from one drive to another. Kind of embarrassing, I know, but I don't know where to start. I'm thinking of this in terms of the Mac OS CLI, where I'm able to copy between /Volumes - is there an equivalent? (There is nothing under /mnt or /media.)
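
    There is no direct /Volumes equivalent; you create a filesystem on the backup disk, mount it somewhere (e.g. /mnt/backup), and copy with rsync. A sketch, assuming /dev/sdb is one of the empty 3 TB disks; double-check device names with sudo fdisk -l before running anything destructive:

        # Partition, format, and mount the backup disk.
        sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
        sudo mkfs.ext4 /dev/sdb1
        sudo mkdir -p /mnt/backup
        sudo mount /dev/sdb1 /mnt/backup

        # Mirror the root filesystem; the excludes skip pseudo-filesystems
        # and the mount point itself. This copies the files but does not by
        # itself make the backup disk bootable.
        sudo rsync -aAXv --delete \
            --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
            / /mnt/backup

    For the incremental MySQL and uploads copies, a cron job running mysqldump and rsync against the second disk would cover the rest.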

    Read the article

  • Conditional action based on whether any file in a directory has a ctime newer than X

    - by jberryman
    I would like to run a backup job on a directory tree from a bash script if any of the files have been modified in the last 30 minutes. I think I can hack together something using find with the -ctime flag, but I'm sure there is a standard way to examine a directory for changes. I know that I can inspect the ctime of the top-level directory to see if files were added, but I need to be able to see changes to existing files as well. FWIW, I am using duplicity to back up directories to S3.
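
    find can do this directly: -cmin takes minutes rather than days, and -print -quit (GNU find) stops at the first match, so the check is cheap even on a big tree. A sketch with illustrative paths; check your duplicity version for its exact S3 target URL syntax:

        # Back up only if something under the tree has a ctime newer than
        # 30 minutes (-cmin -30); use -mmin for mtime instead.
        if [ -n "$(find /path/to/dir -cmin -30 -print -quit)" ]; then
            duplicity /path/to/dir s3+http://my-bucket/backups
        fi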

    Read the article

  • How do I dynamically reference incremented properties in C#?

    - by Jeff Blankenburg
    I have properties called reel1, reel2, reel3, and reel4. How can I dynamically reference these properties by just passing an integer (1-4) to my method? Specifically, I am looking for how to get an object reference without knowing the name of the object. In JavaScript, I would do temp = eval("reel" + tempInt); and temp would be equal to reel1, the object. I can't seem to figure this simple concept out in C#.
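
    The closest C# equivalent is reflection: look the property up by its string name with Type.GetProperty and read it with GetValue. A sketch, with the containing class and the Reel property type invented for illustration:

        using System.Reflection;

        public class Reel { /* whatever your reel type holds */ }

        public class SlotMachine
        {
            public Reel reel1 { get; set; }
            public Reel reel2 { get; set; }
            public Reel reel3 { get; set; }
            public Reel reel4 { get; set; }

            // Equivalent of the JavaScript eval("reel" + tempInt).
            public Reel GetReel(int tempInt)
            {
                PropertyInfo prop = GetType().GetProperty("reel" + tempInt);
                return (Reel)prop.GetValue(this, null);
            }
        }

    If the design allows it, an array or Dictionary<int, Reel> indexed by position is usually cleaner and faster than reflection.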

    Read the article

  • Java - Incrementing through IP addresses in String format

    - by Matt
    I'm new to Java and I'm trying to find a way of incrementing through a user-input IP address range, for example from 192.168.0.1 to 192.168.0.255. The way my application works at the moment is to take the from and to IP addresses as Strings. Is there a way I can increment through all the IP addresses between the user's from and to values? Hope this makes sense, and please don't flame me; I have looked for an answer!
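
    One common approach is to convert each dotted-quad String to a number, loop over the numeric range, and convert back. A self-contained sketch that assumes valid IPv4 input (no validation or error handling):

        public class IpRange {
            static long toLong(String ip) {
                String[] parts = ip.split("\\.");
                long value = 0;
                for (String part : parts) {
                    value = (value << 8) + Integer.parseInt(part);
                }
                return value;
            }

            static String toIp(long value) {
                return ((value >> 24) & 0xFF) + "." + ((value >> 16) & 0xFF) + "."
                        + ((value >> 8) & 0xFF) + "." + (value & 0xFF);
            }

            public static void main(String[] args) {
                long from = toLong("192.168.0.1");
                long to = toLong("192.168.0.255");
                for (long ip = from; ip <= to; ip++) {
                    System.out.println(toIp(ip));
                }
            }
        }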

    Read the article

  • s3fs Input/output error

    - by shadow_of__soul
    I'm trying to set up a backup system with s3fs and the Amazon S3 service. I followed these two guides: http://qugstart.com/blog/linux/how-to-mount-an-amazon-s3-bucket-as-virtual-drive-on-centos-5-2/ and http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/. Anyway, tailing /var/log/messages I get:

        Aug 28 13:37:46 server s3fs:###response=403

    I have already tried creating the authentication file at /etc/passwd-s3fs and setting the access and private keys there, as well as passing them through the command line. I have checked the credentials several times, and they work with S3Fox. I have also set the time of the machine (with the date command) to be the same as the Amazon S3 servers (I got the S3 server's time by uploading a file with the file manager). It is not only rsync that doesn't work; commands like ls or cp in /mnt/s3 don't work either. Any help on how I can solve/debug this? Regards, Shadow.
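
    Two things worth checking for a 403: the /etc/passwd-s3fs file must be in ACCESS_KEY:SECRET_KEY form with owner-only permissions, and running s3fs in the foreground with debugging shows exactly which request fails. A sketch; the option names shown are from s3fs-fuse, and older builds use different flags:

        # Credentials file: one line, ACCESS_KEY:SECRET_KEY, owner-readable only.
        chmod 600 /etc/passwd-s3fs

        # Mount in the foreground with debug output to see the 403 details.
        s3fs mybucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs -f -o dbglevel=info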

    Read the article

  • Remotely sync Time Machine drives

    - by Off Rhoden
    I have an Xserve that runs Time Machine to a local terabyte drive. I also connected my external terabyte drive for a time period and had Time Machine use it to establish the seed data. I plan to take my drive back home with me (out of state) and have the Xserve return to using its local drive for Time Machine. But when I get back home, is there a way to keep my external drive's copy of the Time Machine Backups folder in sync with the Backups folder back on the Xserve? I want a full copy of the history (it makes an awesome remote backup). I've thought of using the Unix command rsync; in fact, that's how I had been doing it, but I was looking for the compactness that Time Machine is able to achieve. Thanks.
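
    If you do fall back to rsync, the flag that matters for Time Machine's compactness is -H: Backups.backupdb gets its space savings from hard links, and without -H every snapshot is copied out as full files. A sketch assuming rsync 3.x (the Apple-bundled 2.6.9 is too old for some of these options) and illustrative volume names:

        # Mirror the backup database, preserving the hard links (-H), ACLs
        # (-A) and extended attributes (-X) that Time Machine depends on.
        sudo rsync -aHAX --delete \
            "/Volumes/TM Local/Backups.backupdb/" \
            "/Volumes/TM External/Backups.backupdb/"

    Even then, copying a live backupdb is fragile; let Time Machine finish (or pause it) before syncing.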

    Read the article

  • DPM 2010 RC Mailbox Recovery Fails

    - by ITGuy24
    I am testing DPM 2010 with Exchange 2010. I am attempting to restore a single mailbox from a previous backup to a Recovery Database. I created and mounted the Recovery Database on the Exchange 2010 server and set the overwrite property. When I run the restore for the mailbox and point it at the recovery database, I get the following error:

        The recovery jobs for Exchange Mailbox Database MailboxDatabase01 that started at with the destination of EXCHANGE2010.domain.com, have completed. Most or all jobs failed to recover the requested data. (ID 3111)
        DPM encountered an error while performing an operation for E:\DatabaseFiles\MailboxDatabase01.edb on EXCHANGE2010.domain.com (ID 2033 Details: The process cannot access the file because it is being used by another process (0x80070020))

    MailboxDatabase01 is one of our MDBs, not the RDB I set up for the recovery. I am confused why it is even trying to access this, as I have triple-checked that the recovery is pointed at the RDB. Any idea what I am doing wrong?

    Read the article

  • CentOS live CD of current installation

    - by mplacona
    I'm trying to create a live CD of my current CentOS installation. I want it to be almost like a backup: whenever I want to copy my current installation to another computer, I would simply install from my custom live CD. I know this is possible and have found some resources on the net, but they all seem to create only a minimal version of CentOS, and I want all of my current functionality available, including my development setup and my Apache and Samba settings. I did this a few years ago (on Debian, though) but can't remember how. Could anyone please shed some light on this? Thanks in advance
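
    One tool aimed at exactly this (a bootable, restorable image of the running system, rather than a minimal kickstart-built CD) is Mondo Rescue. A sketch of its basic invocation; verify the flags against the version you install:

        # -O backup, -i write ISO images, -d destination, -E exclude paths.
        mondoarchive -Oi -d /var/cache/mondo -E /var/cache/mondo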

    Read the article

  • Need to make a scheduled task run as another user but keep the current user’s environment

    - by Chad Marmon
    I need to back up users' .pst files. The current method I am trying is making a shadow copy using Diskshadow. My script works great, but Diskshadow needs to be run as administrator while also retaining the logged-on user's environment variables; specifically, the %USERNAME% and %HOMESHARE% variables, so the right user's files get copied up to the right network location. I have for the most part got this to work, but there's no straightforward (or secure, at least) way to pass the password. If I set up a scheduled task to run the script as a domain user with local admin privileges, the environment variables get lost. I need to run this script automatically, with no user interaction. If I could figure out how to make a scheduled task run as another user but keep the current user's environment, I think this would work, but I've been beating my head against that for a while now, without any luck.
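
    One workaround is to stop relying on inherited variables altogether: capture %USERNAME% and %HOMESHARE% in the user's own context (for example from a logon script) and feed them to the elevated task as arguments. A rough sketch; all paths, file names and the pst-backup.cmd script are hypothetical:

        rem --- Runs as the logged-on user (e.g. a logon script) ---
        echo %USERNAME% %HOMESHARE%> "C:\BackupJobs\%COMPUTERNAME%-user.txt"

        rem --- Runs inside the elevated scheduled task ---
        rem tokens=1,* keeps the whole share path even if it contains spaces.
        for /f "tokens=1,*" %%u in (C:\BackupJobs\%COMPUTERNAME%-user.txt) do (
            call "C:\BackupJobs\pst-backup.cmd" %%u "%%v"
        )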

    Read the article

  • Moving files fails due to privileges but they seem to be OK

    - by joaoc
    I am trying to copy old files from an OS X 10.5.8 machine to a new external HD. When trying to copy a folder I get the message: "The operation cannot be completed because you do not have sufficient privileges for some of the items." I've checked the privileges and they seem OK (read & write for me). What is curious is that the folder is created empty on the target drive, and I can then copy the contents from inside the original folder to inside the new folder without changing anything else. This happens with several folders, but not all, and it is making backups a pain: I have to figure out which folder broke the copy, copy its contents to the external disk, then select what hasn't been copied yet and copy that, until the copy breaks again and I repeat the process.
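
    On 10.5 this Finder error often comes from ACLs rather than the ordinary read/write bits, and Get Info doesn't always show them. A sketch for inspecting (and, if one turns up, removing) an ACL entry from Terminal; the path and the entry text are illustrative:

        # -e lists ACL entries; -d shows the directory itself, not its contents.
        ls -led /path/to/problem/folder

        # Remove a matching deny entry if one appears.
        sudo chmod -a "everyone deny write" /path/to/problem/folder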

    Read the article

  • Is it possible to stream input into RAR

    - by Dscoduc
    I'm using RARLAB's RAR.exe to archive/backup my server data. I am familiar with using RAR to create an archive and add files from a folder, but what about streaming data directly into an archive? For example, when backing up my MySQL databases I use the mysqldump command and redirect its output into a text file. It would be nice to skip the file step and go directly into an archive file, using something like the following syntax:

        mysqldump -uUserName -pPassword --all-databases > rar.exe newarchivename.rar

    Does anyone know if what I have described, or something similar, is even possible?
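
    It is possible: rar has an -si switch that reads the archive contents from stdin, and the name appended to the switch is what the stream is stored as inside the archive. Something like this (credentials and names illustrative):

        mysqldump -uUserName -pPassword --all-databases | rar a -sialldb.sql newarchivename.rar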

    Read the article

  • Mysqldump causes "Too many connections"

    - by vbachev
    A scheduled backup using mysqldump on one of our databases is causing "Too many connections" errors. The database has both InnoDB and MyISAM tables and is around 500 MB in size. The "Too many connections" condition lasts for about 2-3 minutes. We understand that mysqldump locks the tables, causing all other queries and connections to pile up and jam the MySQL server. We need frequent backups, and we cannot afford server downtime or putting websites in maintenance mode while doing it. Our websites are global and traffic is high all the time, so it's hard to find a quiet moment for backups. How can we avoid downtime during backups? Is there maybe a way to use mysqldump so that it will not lock all tables at the same time? Is there an alternative to backing up with mysqldump?
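
    For the InnoDB tables, mysqldump can take a consistent snapshot without holding table locks; the MyISAM tables are still locked for the duration, so converting them to InnoDB (or dumping from a replica) is the usual fix. A sketch with illustrative credentials:

        # --single-transaction: consistent InnoDB snapshot, no table locks.
        # --quick: stream rows instead of buffering whole tables in memory.
        mysqldump --single-transaction --quick -u backupuser -p --all-databases > backup.sql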

    Read the article

  • Can DPM 2007 back up Active Directory?

    - by rbeier
    We're installing Microsoft Data Protection Manager 2007 - we'll be using it to back up Exchange and SQL Server among other things. Does anyone know if DPM can also back up Active Directory? It sounds like the answer is "not really". You can install the DPM agent on a domain controller and make system state backups. But if your Active Directory is out of commission, there will be no way to restore the backups, since DPM depends on AD. Currently we're just using Windows Backup (ntbackup) to take system state backups on one of the DCs. Should we just continue with that? Thanks, Richard
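
    If you stay with ntbackup for the DC, the scheduled system state command looks like this; the job name and output path are illustrative:

        rem Run on the domain controller itself; system state includes AD.
        ntbackup backup systemstate /J "DC1 system state" /F "D:\Backups\dc1-systemstate.bkf"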

    Read the article

  • Ultrium 3 tape drive shoe-shining, 3Mb/s: and it's not the cable

    - by mowsala
    I have an HP 960 Ultrium 3 tape drive. Since I got it (second hand, £90), I've been experiencing shoe-shining. Writing with tar in Linux, I average about 3Mb/s write speed. I've now tried replacing both the SCSI card and the cable, neither of which made any difference. A curious observation I have made is that the write rate is not consistent: sometimes it will write for over a minute without shoe-shining, but more often just a few seconds. I've also tried several tapes, different source drives, and even writing from Windows Backup, to no avail.
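
    Shoe-shining at that speed usually means the data source cannot keep the drive streaming (Ultrium 3 wants tens of MB/s). Two things that often help are a larger tar blocking factor and a RAM buffer between tar and the drive; a sketch with an illustrative device and sizes, using mbuffer:

        # 256 KB blocks (-b 512 = 512 x 512-byte records) and a 1 GB buffer
        # that only starts draining to tape once it is 90% full.
        tar -b 512 -cf - /data | mbuffer -m 1G -P 90 -o /dev/nst0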

    Read the article

  • VSS Not Creating Shadow Set

    - by Jeff Leyser
    I'm trying to set up backup scripts on Windows XP that use Volume Shadow Sets. I downloaded the VSS 7.2 SDK from Microsoft and used the included vshadow.exe to create a shadow set: vshadow -script=vss-setvar.cmd f: (note that I've tried both f: and c:). vshadow executes just fine, giving no errors and reporting that the shadow was created. However, executing vshadow -q as the very next command results in "There are no shadows on the system", and indeed, if I use dosdev to try to map the shadow set named in vss-setvar.cmd, it will not work. Am I missing a step?
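
    One likely culprit: on Windows XP, shadow copies are non-persistent; they are released as soon as the creating vshadow.exe process exits, which is why vshadow -q finds nothing a moment later (the persistent -p switch needs Server 2003). vshadow's -exec switch runs your backup while the shadow still exists; do-backup.cmd below is a hypothetical script of yours:

        rem Create the shadow, then run the backup before vshadow exits.
        vshadow -script=vss-setvar.cmd -exec=do-backup.cmd f:

        rem Inside do-backup.cmd, load the generated variables and use the
        rem shadow device while it is still alive, e.g.:
        rem     call vss-setvar.cmd
        rem     dosdev B: %SHADOW_DEVICE_1%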

    Read the article

  • Is there a way to import email from the raw email files?

    - by Chris Schmitz
    I have a client who recently switched hosts. When they switched, they didn't back up their email before updating their configuration settings, so they lost everything. However, I was able to log in to their old hosting control panel and download their mail folder. I am wondering if there is a way to extract their emails and/or contacts from the files. I'm not sure what type of files they are (there is no extension), but the directory is structured like this:

        mail/
            .Drafts/
            .Sent/
            .Trash/
            cur/
            new/
            theirdomain.com/
            tmp/
            [email protected]
            maildir

    Inside of the theirdomain.com folder there is a folder for each account, and inside of that is a folder called "cur" which has a whole bunch of files with names like 1292945327.H169813P25958.uscentral21.myserverhosts.com,S=10117/2,S. If I preview those files I can see the actual email messages inside of them, but I have no idea how to get that information from those files to an email client. Anyone know of a way to work with these files? Thanks in advance for any insight you can share!
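
    The cur/new/tmp layout with one message per file is the standard Maildir format, so nothing exotic needs converting: point any Maildir-aware IMAP server (Dovecot, for example) at a copy of the folders, then let an email client pull the mail over IMAP. A sketch with an illustrative recovery user and paths:

        # Copy the downloaded folders into a recovery user's Maildir.
        cp -a mail/ /home/recovery/Maildir/
        chown -R recovery:recovery /home/recovery/Maildir
        # With Dovecot configured for mail_location = maildir:~/Maildir,
        # log in as 'recovery' over IMAP and drag the mail wherever needed.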

    Read the article

  • Tape Supplier in Canada

    - by JohnyD
    I'm not sure if it's kosher to ask, but where can one purchase good-quality, well-priced backup tapes within Canada, online? I'm asking because my current vendor, DataWrite, has sent me bad tapes for the third time in a row. I have no idea where they get their product, but when I put in an order for 20 tapes and 2 cleaning tapes, I expect those tapes to work, not to have to call them up 4 times over the next 2 weeks. Well, like I said, this is the third batch that I'm sending back, and we've been dealing with them exclusively for well over 5 years, so I thought I'd ask the pool of knowledge. I'm currently looking for DAT72 tapes, but soon will be looking for LTO-3 and LTO-4. Thank you

    Read the article

  • Are whole VM images backed up on Amazon EC2/S3?

    - by John
    I've been trying to get my head around Amazon Web Services as a VPS provider. My understanding is that an EC2 instance running Windows is basically a Windows VM, very similar to renting a VPS from a more traditional hosting provider. I don't want complex backups, either to administer or to restore; if my restore involves installing SVN, MySQL, Jira, etc. on a new box before I can even try to restore the backup, then it's not great to me. What I really want is a service that backs up my entire VM: if the PC running the VPS dies, the VM image is installed on a new PC and off we go again. With Amazon being all about flexibility and elasticity, I wondered if they have this service. I can't figure it out from reading their docs.
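
    This exists: an EBS-backed instance can be captured whole as an Amazon Machine Image (AMI), and launching that AMI later restores the full VM. With today's AWS CLI it is a one-liner (instance ID and name illustrative; the tooling looked different in EC2's early days):

        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "vps-backup-$(date +%Y%m%d)" --no-reboot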

    Read the article

  • Best way to Duplicate a Laptop's Hard Drive One-to-One

    - by Urda
    I have a Lenovo X61 Tablet computer with a plain SATA drive inside, dual-booting Windows 7 and Ubuntu 9.10. I want to back up both of these OSes and their special partitions (Windows 7 has one, and of course the Linux swap). I want a one-to-one backup; all of my mission-critical data is already backed up, but I would like to get a snapshot and store it on a larger file server at home for quick recovery. What is the best approach to do this?
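
    A common approach is to boot the laptop from a live CD/USB (so neither OS is mounted) and image the whole disk, partition table and all, straight to the file server. A sketch; /dev/sda and the server path are illustrative, and tools like Clonezilla automate the same idea:

        # Image the entire disk over SSH, compressed.
        sudo dd if=/dev/sda bs=4M conv=noerror,sync | gzip -c | \
            ssh user@fileserver 'cat > /backups/x61-disk.img.gz'

        # Restore later by reversing the pipe onto a same-size disk.
        ssh user@fileserver 'cat /backups/x61-disk.img.gz' | gunzip -c | \
            sudo dd of=/dev/sda bs=4M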

    Read the article
