Search Results

Search found 5779 results on 232 pages for 'backup restoration'.


  • What do backup procedures and troubleshooting guidelines mean for a system?

    - by Podolski
    I am writing the documentation for a piece of software I have made, but I don't understand what some of the required sections mean. It asks me to write about backup procedures, but what exactly does this mean? Does it mean backing up the database to another hosting service, or something else entirely? I am equally dumbfounded by what troubleshooting guidelines are. If you have any idea what these could mean, feel free to give your insight even if you aren't 100% sure, in case it sparks the right idea for me. Thanks.

    Read the article

  • How to Identify and Back Up the Latest SQL Server Database in a Series

    I have to support a third-party application that periodically creates a new database on the fly, which causes issues with our backup mechanisms. The databases follow a particular naming pattern, so I can identify the set of databases; however, I need to make sure I'm always backing up the newest one. Read this tip to ensure you are backing up the latest database in a series.
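
    One way to approach this, sketched in Python with pyodbc; the 'AppDB_' naming prefix, server, and backup path are hypothetical stand-ins, not the tip's actual script:

        # Back up the most recently created database matching a naming pattern.
        import pyodbc

        PREFIX = 'AppDB_'  # hypothetical naming pattern
        conn = pyodbc.connect(
            'DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;'
            'DATABASE=master;Trusted_Connection=yes',
            autocommit=True)  # BACKUP DATABASE cannot run inside a transaction
        cur = conn.cursor()
        # The newest database in the series is the one with the latest create_date
        cur.execute("SELECT TOP 1 name FROM sys.databases "
                    "WHERE name LIKE ? + '%' ORDER BY create_date DESC", PREFIX)
        row = cur.fetchone()
        if row:
            # Database names cannot be parameterised in BACKUP DATABASE; the name
            # comes straight from sys.databases, so interpolating it is safe here.
            cur.execute("BACKUP DATABASE [%s] TO DISK = N'C:\\Backups\\%s.bak' "
                        "WITH INIT" % (row.name, row.name))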

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
    A couple of days ago I received the following email. I found it very interesting and felt like sharing it with all of you. Note: please read the whole email before offering your suggestions.

    “Hi Pinal, If you can share these details on your blog, it will help many. We understand the value of the master database and take a regular backup of it (every day at midnight). Yesterday we noticed that our master database log file had grown very large. This is the first time we have encountered such an issue. The master database is in simple recovery mode, so we assumed its log would never grow big; however, we now have a big log file. We ran the following command:

        USE [master]
        GO
        DBCC SHRINKFILE (N'mastlog', 0, TRUNCATEONLY)
        GO

    We know this command breaks the chain of LSNs, but as per our understanding that should not matter since we are in the simple recovery model. After running this, the log file became very small. Just to be cautious, we took a full backup of the master database right away. We totally understand that this is not normal practice, so if you are going to tell us the same, we are aware of it. However, here is the question for you: what operation in the master database could have caused our log file to grow so large? Thanks, [name and company name removed as per request]“

    Here was my response to them:

    “Hi [name removed], It is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice. Regarding your question about what could have caused your master database log to grow so large, let me try to guess what happened. Do you have any user tables in the master database? If so, that is not recommended and NOT a good practice. If you have user tables in the master database and you run any long operation on them (perhaps lots of inserts, updates, deletes, or rebuilds), it can cause this situation. You have made me curious about your scenario; do revert back. Kind Regards, Pinal”

    Within a few minutes I received a reply:

    “That was it, Pinal. We had one of our maintenance-task log tables created in the master database, and it saw many long transactions during the night. We have moved it to a newly created database named ‘maintenance’, and we will keep you updated.”

    I was very glad to receive the email. I do not suggest that any user table be created in the master database; it should be left free of user objects. Now here is the question for you: can you think of any other reason for master log file growth?

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
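
    As a quick check for the same issue on your own servers, here is a minimal sketch in Python with pyodbc; the connection string is an assumption, the catalog query is standard:

        # List user-created tables in the master database; ideally there are none.
        import pyodbc

        conn = pyodbc.connect(
            'DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;'
            'DATABASE=master;Trusted_Connection=yes')
        cur = conn.cursor()
        # is_ms_shipped = 0 filters out the objects SQL Server itself installs
        cur.execute("SELECT name FROM master.sys.tables WHERE is_ms_shipped = 0")
        for row in cur.fetchall():
            print('User table found in master:', row.name)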

    Read the article

  • How do I back up and restore the system clipboard in C#?

    - by gtaborga
    Hey everyone, I will do my best to explain in detail what I'm trying to achieve. I'm using C# with IntPtr window handles to perform a Ctrl-C copy operation on an external application from my own C# application. I had to do this because there was no way of accessing the text directly using GET_TEXT. I then use the text content of that copy within my application. The problem is that I have now overwritten the clipboard. What I would like to be able to do is: back up the original contents of the clipboard, which could have been set by any application other than my own; then perform the copy and store the value in my application; then restore the original contents of the clipboard so that the user still has access to his/her original clipboard data. This is the code I have tried so far:

        private void GetClipboardText()
        {
            text = "";
            IDataObject backupClipboard = Clipboard.GetDataObject();
            KeyboardInput input = new KeyboardInput(this);
            input.Copy(dialogHandle); // Performs a CTRL-C (copy) operation
            IDataObject clipboard = Clipboard.GetDataObject();
            if (clipboard.GetDataPresent(DataFormats.Text))
            {
                // Retrieves the text from the clipboard
                text = clipboard.GetData(DataFormats.Text) as string;
            }
            if (backupClipboard != null)
            {
                Clipboard.SetDataObject(backupClipboard, true); // throws exception
            }
        }

    I am using System.Windows.Clipboard and not System.Windows.Forms.Clipboard. The reason is that when I performed the Ctrl-C, the Clipboard class from System.Windows.Forms did not return any data, but the system clipboard did. I looked into some of the low-level user32 calls like OpenClipboard, EmptyClipboard, and CloseClipboard, hoping that they would help me do this, but so far I keep getting COM exceptions when trying to restore. I thought perhaps this had to do with the OpenClipboard parameter, which expects an IntPtr window handle of the application that wants to take control of the clipboard. Since my application does not have a GUI, this is a challenge; I wasn't sure what to pass here. Maybe someone can shed some light on that? Am I using the Clipboard class incorrectly? Is there a clean way to obtain the IntPtr window handle of an application with no GUI? Does anyone know of a better way to back up and restore the system clipboard?

    Read the article

  • How to extract files from Windows Vista Complete PC Backup?

    - by Martin
    Is there a program or API I can code against to extract individual files from a Windows Vista Complete PC Backup image? I like the idea of having a complete image to restore from, but hate the idea that I have to make two backups, one for restoring individual files, and one for restoring my computer in the event of a catastrophic failure.

    Read the article

  • How can I schedule a daily backup with SQL Server Express?

    - by edosoft
    I'm running a small web application with SQL Server Express (2005) as the backend. I can create a backup with a SQL script; however, I'd like to schedule this on a daily basis. As an extra option (a should-have), I'd like to keep only the last X backups, for obvious space-saving reasons. Any pointers? [edit] SQL Server Agent is unavailable in SQL Server Express...
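
    With SQL Server Agent unavailable, one common workaround is a script run daily by the Windows Task Scheduler. A minimal sketch in Python with pyodbc; the database name, backup folder, and retention count are hypothetical:

        # Daily backup plus retention pruning for SQL Server Express.
        import datetime
        import os
        import pyodbc

        DB = 'MyWebAppDb'        # hypothetical database name
        FOLDER = r'C:\Backups'   # hypothetical backup folder
        KEEP = 7                 # keep only the last X backups

        stamp = datetime.date.today().strftime('%Y%m%d')
        path = os.path.join(FOLDER, '%s_%s.bak' % (DB, stamp))

        conn = pyodbc.connect(
            r'DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost\SQLEXPRESS;'
            'DATABASE=master;Trusted_Connection=yes',
            autocommit=True)  # BACKUP DATABASE cannot run inside a transaction
        conn.cursor().execute(
            "BACKUP DATABASE [%s] TO DISK = N'%s' WITH INIT" % (DB, path))

        # Delete all but the newest KEEP backups of this database
        baks = sorted(f for f in os.listdir(FOLDER)
                      if f.startswith(DB + '_') and f.endswith('.bak'))
        for old in baks[:-KEEP]:
            os.remove(os.path.join(FOLDER, old))

    Registering the script with the Task Scheduler (for example via schtasks /create) provides the nightly trigger that SQL Server Agent would otherwise supply.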

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting, with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is: dump the database, tar.gz all the files into one backup named with the date of the backup, and upload it to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup), it uses about 320MB just for the backup. This causes WebFaction to kill all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: is there any way to avoid loading the whole file into memory, or are there other Python S3 libraries that are much better with RAM usage? Ideally it needs to use about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be killed:

        filedata = open(filename, 'rb').read()  # reads the entire file into memory
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME, 'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read',
                                   'Content-Type': content_type})
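
    For the memory problem itself, a later AWS SDK can stream the upload in bounded-size parts instead of reading the file whole. A minimal sketch using boto3 (an assumption: boto3 postdates the library in the question; the bucket, key, and file names are illustrative):

        # Stream a large file to S3 using boto3's managed transfer, which
        # performs a multipart upload under the hood.
        import boto3
        from boto3.s3.transfer import TransferConfig

        s3 = boto3.client('s3')
        # 8 MB parts: memory use stays near the part size, not the file size
        config = TransferConfig(multipart_chunksize=8 * 1024 * 1024)
        s3.upload_file('backup-20240101.tar.gz',        # local file (illustrative)
                       'my-backup-bucket',              # bucket name (illustrative)
                       'daily/backup-20240101.tar.gz',  # object key
                       Config=config)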

    Read the article

  • Rolling Back Microsoft CRM during testing

    - by npeterson
    Process-related question: currently we have a multi-tenant installation of MS CRM 4.0 on three servers: Dev, Test, and Live. We are actively working on customizing one of the tenants, but the others are static. During user testing, we often find it necessary to 'start fresh' in one of the tenants. Is it better to try to delete the changes from the tenant (created accounts, leads, etc.), or just revert the database to a backup from before the testing started? Are there compelling reasons why bulk delete is not advisable for MS CRM, or why reverting the database frequently could cause issues?

    Read the article

  • Can Windows Home Server be used on an active directory domain?

    - by Parvenu74
    The situation: an Active Directory network with a few dozen machines. Most of the machines have the same vanilla image applied to them, so if there were a hard drive failure, getting the machine back to the standard network image would be quick and easy. However, there are a handful of machines (eight) with rather unique setups (accounting, developers, the "artist" with CS4, and such). For these machines we would like to use Windows Home Server, since the backups are automatic and recovery from a machine failure is quite painless. The question, though, is whether or not WHS can be used on an Active Directory network. If not, what "set it and forget it" backup/imaging product is recommended for this scenario?

    Read the article

  • Checking if your SIMPLE databases need a log backup

    - by Fatherjack
    Hopefully you have read the blog by William Durkin explaining why your SIMPLE databases need a log backup in some cases. There is a SQL Server bug that means some databases are marked as being in SIMPLE recovery but have a log wait type showing they are not properly configured. Please read his blog for the full explanation and a great description of how to reproduce the issue. As part of our (William happens to be my boss) work to recover our affected databases, I wrote this small PowerShell script to quickly check our servers for databases that need the attention William details:

        cls
        # Load SMO (assumes the SQL Server management assemblies are installed)
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
        $Servers = "Server01", "Server02", "etc", "etc"
        foreach ($Server in $Servers) {
            Write-Host "************ $Server ****************"
            $smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
            foreach ($db in $smoServer.Databases) {
                $db | Where-Object { $_.RecoveryModel -eq "Simple" -and $_.LogReuseWaitStatus -ne "Nothing" } |
                    Select-Object Name, LogReuseWaitStatus
            }
        }

    If you get any results from this script, then you should consult William's blog for the details on what action to take. The script can give false positives in some circumstances, depending on how busy your databases are. Hopefully this will let you check your servers quickly, and if you find any problems you can reference William's blog to understand what you need to do.

    Read the article

  • Offsite Backup

    - by Grant Fritchey
    There was a recent weather event in the United States that seriously impacted our power grid and our physical well-being. Lots of businesses found that they couldn't get to their building, or that their building was gone. Many of them got to do a full test of their disaster recovery processes. A big part of DR is having the ability to get yourself back online in a different location. Now, most of us are not going to be paying for multiple sites, but we need the ability to move to one if needed. The best way to start setting this up is to have an off-site backup. Want an easy way to automate that? I mean, yes, you can go to tape or to a portable drive (much more likely these days) and then carry that home, but we've all got access to off-site storage these days: SkyDrive, DropBox, S3, etc. How about just backing up to there? I agree, great idea. That's why Red Gate is setting up some methods around it. Want to take part in the early access program? Go here and try it out.

    Read the article

  • How to make and restore incremental snapshots of a hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro and application testing. One of the features I simply love is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any fuss and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've seen create a complete new image of the file system each time. Are there any programs or file systems that can take a snapshot of the current file system, save it in another location, and instead of making a complete new image create incremental backups? To describe simply what I want: it should be like dd images of a file system, but instead of only full backups it would also create incrementals. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity, back up your whole system excluding some folders" script plus dd to save your MBR; I can do that myself. I'm looking for extra finesse: something I can run before making massive changes to a system, so that if something went wrong, or I burned my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, and preferably raw-based, not file-copy based.

    Read the article

  • Replacing all disks in a non-OS RAID 5 volume

    - by molecule
    Hi all, we currently have a server with 8 x HDD slots. It is an HP DL380 G5 with a P400 controller. 2 x HDD are in a RAID 1+0 config and host the OS. 6 x HDD are in a RAID 5 config and hold an Oracle DB. Basically, the RAID 5 volume is running out of space and we would like to swap all 6 disks for higher-capacity ones. Excuse my ignorance, as I am pretty new to this... I believe we will need to back up the data, delete the RAID volume, insert the new disks, recreate the volume, and restore the data. Two questions: 1. Do we need to worry about the OS partition, or is it completely independent, so we can simply take out the 6 disks, insert 6 new ones, and get the controller to recognise them and form a new RAID 5 volume? We should not need to reinstall the OS or Oracle, correct? 2. Since we are going to restore the data on the volume from another source (our vendor will take care of this), but we would like to keep the existing data on the 6 old disks in case we run into issues and want to fall back: is this possible? Thanks in advance.

    Read the article

  • SaaS Multi-tenancy Applications: How are data import/export/backup implemented?

    - by Mark Redman
    How are applications providing import/export (or backup) of data in SaaS-based multi-tenancy applications, particularly single-database designs? Imports: keeping things simple, I think basic imports are useful, i.e. CSV to a spec, or a way of providing a mapping between CSV columns and fields in the database. Exports: in single-database designs I have seen XML exports and HTML (basic generated site) exports of data. I would assume that XML is the better option? How does one cater for relational data? Would you reference various things within the XML and provide documentation of the relationships, or let users figure this out? Are vendors providing an export/backup that can be imported back in/restored? Your comments are appreciated.
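
    On the relational-data question, one common pattern is to nest child rows inside their parent element, so the relationship is explicit in the export itself. A minimal sketch in Python; the customer/order tables and columns are hypothetical:

        # Export related rows as nested XML so the parent/child relationship
        # is visible in the file itself. Table and column names are hypothetical.
        import xml.etree.ElementTree as ET

        customers = [{'id': 1, 'name': 'Acme'}]
        orders = [{'id': 10, 'customer_id': 1, 'total': '99.00'}]

        root = ET.Element('export')
        for c in customers:
            c_el = ET.SubElement(root, 'customer', id=str(c['id']))
            ET.SubElement(c_el, 'name').text = c['name']
            # Nest each customer's orders under the customer element
            for o in (o for o in orders if o['customer_id'] == c['id']):
                o_el = ET.SubElement(c_el, 'order', id=str(o['id']))
                ET.SubElement(o_el, 'total').text = o['total']

        ET.ElementTree(root).write('export.xml', encoding='utf-8',
                                   xml_declaration=True)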

    Read the article

  • How can I tell if a SQL Server database is being backed up?

    - by Guy
    Is there a way to programmatically determine if a SQL Server backup is currently being performed on a particular database? We have automated database backup scripts for both data and log files; the databases are backed up nightly and the log files are backed up every 15 minutes, 24 hours a day. However, we think the log file backup job fails if it runs at the same time as the full backup. What I'd like to do is change my transaction log script so that it does not run a transaction log backup while the full backup is running. Is there a DMV or a system table that I can query to work this out?
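
    One way this can be done is via the sys.dm_exec_requests DMV, where running backups appear as BACKUP commands. A minimal sketch in Python with pyodbc; the connection string and database name are assumptions:

        # Detect an in-progress backup of a database via sys.dm_exec_requests.
        import pyodbc

        def backup_in_progress(cursor, db_name):
            # BACKUP DATABASE / BACKUP LOG show up as commands in dm_exec_requests
            cursor.execute(
                "SELECT COUNT(*) FROM sys.dm_exec_requests "
                "WHERE command LIKE 'BACKUP%' AND database_id = DB_ID(?)", db_name)
            return cursor.fetchone()[0] > 0

        conn = pyodbc.connect(
            'DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;'
            'DATABASE=master;Trusted_Connection=yes')
        if backup_in_progress(conn.cursor(), 'MyDatabase'):  # hypothetical name
            print('Full backup running; skip the log backup this cycle.')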

    Read the article

  • Backup Exec tape rotation guidelines

    - by HannesFostie
    Hi, we use Backup Exec to take care of the backups for our data server, our Exchange server, and one more set of systems. Each of these three is backed up on a separate set of tapes. Our goal is to be able to roll back a full two weeks, with one full backup each weekend and differential/incremental backups in between (the difference between the two in our case isn't very big, because the employees mostly use a very similar set of files throughout the week). While playing around with the settings on how to achieve this, we set Backup Exec to keep the full backup for 14 days, but because we have too much data this requires manual intervention each time to erase a certain tape and reuse it. What I would like to know is what guidelines, tricks, tips, and general "stuff to think about" you keep in mind when designing your backup schedule. The type of backups (full/diff/incr) isn't of much importance in our case, as it's more or less set in stone. I made this community wiki as it's not a very specific question. Thanks in advance!

    Read the article

  • Symantec NetBackup restore - Incremental backup

    - by w0051977
    We are using NetBackup as a corporate solution. Incremental backups are taken daily during the week, and a full backup is done at the weekend (Saturday). My colleague has restored a folder to how it stood at 14:00 on a Tuesday. The problem is that the restore pulls in files from the weekend full backup even if they did not exist at the point in time being restored. For example, the folder we are restoring should look like this (this is how it looked on Tuesday at 14:00):

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt

    The folder looked like this at the weekend when the full backup ran (Test3.txt existed then, but had been deleted by Tuesday):

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt
        Test3.txt

    The actual folder restored looks like this:

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt
        Test3.txt

    Test3.txt should not be restored, because it did not exist at the point in time of the restore. Is there a setting somewhere that we are missing? The folder in question is 200GB; the example above is a simplification. I realise this is a basic question.

    Read the article
