Search Results

Search found 7545 results on 302 pages for 'backup restore'.


  • Filesystem synchronization library?

    - by IsaacB
    Hi, I've got 10 GB of files to back up daily to another site. The client is way out in the country, so bandwidth is an issue. Does anyone know of existing software or libraries that help keep a folder and its files synchronized across a slow link, i.e. only sending files that have changed? Some kind of hash checking would be nice too, to at least confirm the two sides are the same. I don't mind paying some money for this, seeing as it might take me several weeks to a month to implement something decent on my own; I just don't want to reinvent the wheel here. BTW, it is a Windows shop (they have an in-house Windows IT guy), so Windows is preferred. I also have 10 GB of SQL Server 2000 databases to send across. Is SQL Server's replication mode reliable? Thanks!
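    For what it's worth, this is exactly the problem rsync solves: it transfers only the changed portions of files and can verify both ends with checksums. A minimal sketch, assuming the paths and hostname below (on Windows, rsync is typically run via cwRsync or Cygwin):

        # Mirror the folder to the remote site; only changed data crosses the link
        rsync -az --partial --delete /data/client-files/ user@remote-site:/data/client-files/

        # Occasionally force a full checksum comparison to confirm both sides match
        rsync -azc --dry-run --itemize-changes /data/client-files/ user@remote-site:/data/client-files/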

    Read the article

  • What does the BucketAlreadyOwnedByYou error (from Amazon S3) actually mean?

    - by Phyo Wai Win
    Hi there, I am using Amazon S3 to back up my Rails app's MySQL database, using the astrails-safe plugin, and I get the "Your previous request to create the named bucket succeeded and you already own it. (AWS::S3::BucketAlreadyOwnedByYou)" error whenever I try to upload. I have checked that the bucket I am backing up to already exists in my account; it's just that I can't upload the files from the code (using astrails-safe). Any help would be appreciated! Thanks.
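    As I understand the S3 API, this error simply means a create-bucket request named a bucket your account already owns; the usual fix is to create the bucket only when it does not already exist. A hedged sketch using the modern AWS CLI (the bucket name is a placeholder):

        # head-bucket succeeds only if the bucket exists and is accessible to you
        aws s3api head-bucket --bucket my-backup-bucket 2>/dev/null \
          || aws s3api create-bucket --bucket my-backup-bucket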

    Read the article

  • Tools to work with App Engine data dumps

    - by Thilo
    Using the bulkloader.py utility you can download all data from your application's Datastore. It is not obvious how the data is stored, however. From the looks of it, you get a SQLite file with all data in binary format in a single table:

        sqlite> .tables
        bulkloader_database_signature  result
        sqlite> .schema result
        CREATE TABLE result (
            id BLOB primary key,
            value BLOB not null,
            sort_key BLOB);

    Are there any tools to work with this data?
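    Since the value column holds serialized entities rather than anything directly readable, plain SQL at least confirms what is in the file. A small sketch (the dump's filename here is an assumption):

        sqlite3 bulkloader-results.sql3 "SELECT COUNT(*) FROM result;"
        sqlite3 bulkloader-results.sql3 "SELECT hex(id) FROM result LIMIT 5;"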

    Read the article

  • Importing a project into an SVN repository question

    - by ajsie
    I'm using NetBeans with SVN. I open a project in NetBeans and then import it into an SVN repository. It seems that although I'm only importing the project folder, SVN creates .svn folders in all folders within this project folder. Why is that? I thought .svn folders were only created in checked-out projects, not imported ones. Now this folder acts very weird: when I open it as a project in NetBeans, NetBeans treats it like an SVN working copy in some way. Is this normal? I want this copy to not be under SVN.
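    For context, a plain "svn import" from the command line does not turn the source folder into a working copy; the .svn folders suggest the IDE checked the project out in place after importing. If the goal is a copy free of version-control metadata, a sketch (the repository URL is a placeholder):

        # Strip all .svn administrative folders from the current tree
        find . -type d -name .svn -exec rm -rf {} +

        # Or export a clean, unversioned copy straight from the repository
        svn export http://svnserver/repo/project project-clean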

    Read the article

  • Mercurial between server and local?

    - by artmania
    I have portal development work in progress... I have had some trouble from time to time, like losing or overwriting the wrong files, so I decided to use Mercurial for this development. It is my first experience with source control. I work on the server [Bluehost] for this project; is there any way to keep updated backups locally? Do I have to set up Mercurial on Bluehost? Is there any way to sync changes on the server to my local Mac?
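    Assuming Mercurial can be installed on the Bluehost account and the repository lives there, cloning it over SSH gives exactly this kind of local backup (the paths and hostname below are placeholders):

        # One-time: clone the server-side repository to the local Mac
        hg clone ssh://user@example.bluehost.com/path/to/repo ~/portal-backup

        # Thereafter: pull and apply any server-side changes
        cd ~/portal-backup && hg pull -u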

    Read the article

  • SQL Server 2005: "CONSTRAINT" keywords lost when copying a table structure to another database

    - by StreamT
    Snippet of the original table:

        CREATE TABLE [dbo].[Batch](
            [CustomerDepositMade] [money] NOT NULL
                CONSTRAINT [DF_Batch_CustomerDepositMade] DEFAULT (0)

    Snippet of the copied table:

        CREATE TABLE [dbo].[Batch](
            [CustomerDepositMade] [money] NOT NULL,

    Code used to copy the database:

        Server server = new Server(SourceSQLServer);
        Database database = server.Databases[SourceDatabase];
        Transfer transfer = new Transfer(database);
        transfer.CopyAllObjects = true;
        transfer.CopySchema = true;
        transfer.CopyData = false;
        transfer.DropDestinationObjectsFirst = true;
        transfer.DestinationServer = DestinationSQLServer;
        transfer.CreateTargetDatabase = true;
        Database ddatabase = new Database(server, DestinationDatabase);
        ddatabase.Create();
        transfer.DestinationDatabase = DestinationDatabase;
        transfer.Options.IncludeIfNotExists = true;
        transfer.TransferData();
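    One likely cause, offered as an assumption rather than a confirmed fix: when SMO's Transfer scripts objects, declarative referential integrity (defaults, checks, foreign keys) is only included if the scripting options request it. A hedged one-line addition to the code above:

        // Assumption: ask SMO to script all DRI objects (defaults, checks, FKs)
        transfer.Options.DriAll = true;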

    Read the article

  • Backing up user data

    - by shani
    In my app the user saves data to an archive using Core Data and SQLite. 1. Is there a way to let him back up his data and restore it in the future? 2. Is the user's data included in the iPhone's regular backup? Thanks, shani

    Read the article

  • Syncing a large personal school-material git repo with things such as casual notes? Rsync, wget and Git -- or some ready-made tool?

    - by hhh
    My friend wants to store her school notes electronically and process them fast, with backups. Her repo is already over 2 GB and growing all the time (mostly appended material, i.e. more school notes in different formats: PDF, pictures and scans, some text files, etc.). Her goal is to process the notes fast. I suggested a command like this:

        # crontab -e
        @weekly wget --random-wait -e robots=off -U mozilla --mirror http://VeryLong.com

    But I think plugging rsync in somewhere could make it work much better with Git. How would you help my friend process and store the school material under Git version control while still keeping the size reasonable? Perhaps related:

        rsync .git directory
        rsync git big repository
        Different scope Git/rsync mix for projects with large binaries and text files
        What's a good way to organize a large collection of personal scripts using git?
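    One common split, sketched here under assumed paths and hostnames: keep the text notes under Git, where diffs are cheap, and mirror the bulky scans and PDFs with rsync so the repository stays a reasonable size:

        # Version only the lightweight text notes
        cd ~/school-notes && git add notes/ && git commit -m "weekly notes"

        # Mirror the heavy binary material separately; only changes are transferred
        rsync -az --delete ~/school-notes/media/ backup-host:/srv/school-notes-media/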

    Read the article

  • How do I back up a remote SVN repository

    - by Roaders
    Hi all, I am currently moving my SVN server from my home server to my remote server so I can access it more easily from other locations. My remote server is not backed up, so I want to regularly back it up to my home server. The remote server is Windows Server 2003; the home server is Windows Home Server. What is the best way to do this? Can I get my home server to take a dump of the remote server every night? Bandwidth isn't a huge consideration, but if I could just copy any new check-ins to an SVN server on my home server, that would be fine. Any suggestions welcome. Thanks
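    Subversion ships with svnsync for exactly this incremental pattern, assuming the remote repository is reachable over HTTP or svnserve: initialize a local read-only mirror once, then let a nightly scheduled task copy across only the new revisions (URLs and paths are placeholders):

        # One-time: create the mirror and bind it to the remote repository
        # (the mirror repo needs a pre-revprop-change hook that simply exits 0)
        svnadmin create C:\svn-mirror
        svnsync init file:///C:/svn-mirror http://remote-server/svn/repo

        # Nightly: pull across only the revisions added since last time
        svnsync sync file:///C:/svn-mirror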

    Read the article

  • How to version-control data stored in MySQL

    - by Shawn
    I'm trying to use a simple MySQL database but tweak it so that every field is backed up to an indefinite number of versions. The best way I can illustrate this is by replacing each field of every table with a stack of all the values that field has ever had (each of these values timestamped). I guess it's like having customized version control for all my data. Any ideas on how to do this?
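    The conventional shape for this, sketched with hypothetical table and column names, is a per-table history table populated by a trigger, so every change appends a timestamped row instead of overwriting:

        -- Hypothetical base table
        CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(100), email VARCHAR(100));

        -- History table: one row per superseded version of each record
        CREATE TABLE customer_history (
            id INT, name VARCHAR(100), email VARCHAR(100),
            changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );

        DELIMITER //
        CREATE TRIGGER customer_audit AFTER UPDATE ON customer
        FOR EACH ROW
        BEGIN
            -- Save the old values before they are replaced
            INSERT INTO customer_history (id, name, email)
            VALUES (OLD.id, OLD.name, OLD.email);
        END//
        DELIMITER ;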

    Read the article

  • Keeping track of dirty blocks on a block device

    - by mikeY
    I'm looking for a way to keep track of which blocks on a block device are modified after a point in time. What I eventually want to use this for is keeping two 2 TB disks in sync, one of which only comes online (connected through USB) once a month. Without knowing which blocks have been modified, I have to go through the whole 2 TB every time. I'm using a recent GNU/Linux OS and have C and Python experience. I'm hoping to avoid writing kernel-level code, as I don't have any experience in that area whatsoever. My current theory is that there should be hooks somewhere where my code can get called when a disk flush is performed. Any ideas?
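    Short of writing kernel code, the block layer's existing tracing interface can record writes as they happen; logging write events between the monthly syncs would yield the list of dirty sectors. A sketch, with the device name as a placeholder:

        # Log only write events on the device; each record carries the sector offset
        blktrace -d /dev/sdb -a write -o - | blkparse -i - > /var/log/sdb-writes.log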

    Read the article

  • Is this safe? Is this OK to do in MySQL?

    - by alex
    I have always done this:

        mysqldump -hlocalhost -uuser -ppass MYDATABASE > /home/f/db_backup/MYDATABASE.sql
        mysql -uuser -ppass MYDATABASE < MYDATABASE.sql

    But if I do this instead... is it safe? Is it identical to the above?

        mysqldump -hlocalhost -uuser -ppass MYDATABASE | gzip > /home/f/db_backup/MYDATABASE.sql.gz
        zcat MYDATABASE.sql.gz | mysql -uuser -ppass MYDATABASE
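    The two are equivalent: gzip is a lossless byte-stream compressor, so the SQL that reaches mysql is byte-for-byte the same. An easy self-check, as a sketch, is to keep one dump in both forms and compare hashes:

        mysqldump -hlocalhost -uuser -ppass MYDATABASE > MYDATABASE.sql
        gzip -c MYDATABASE.sql > MYDATABASE.sql.gz
        md5sum MYDATABASE.sql            # these two hashes
        zcat MYDATABASE.sql.gz | md5sum  # should be identical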

    Read the article

  • Should I be backing up a webapp's data to another host continuously?

    - by user196289
    I have a webapp in development. I need to plan for what happens if the host goes down. I will lose some very recent session state (which I can live with), and everything else should be persistently stored in the database. If I am starting up again after an outage, can I expect a good host to restore the database to within minutes of where I was? Or seconds? Or should I build a background process to continually mirror the database elsewhere? What is normal/sensible? Obviously a good host will have RAID and other redundancy, so the likelihood of total loss should be low, and if they take periodic backups I should lose only very recent data; but this is presumably designed with almost-static web content in mind, and my site is transactional, with new data being filed continuously (and a customer expectation that I don't ever lose it). Any suggestions/advice? Are there off-the-shelf frameworks for doing this? (I'm primarily working in Java.) And should I just plan to save the data, or should I plan to have an alternative usable host implementation ready to launch in case the host doesn't come back up in a suitable timeframe?
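    For the continuous-mirror option, if the database is MySQL (an assumption here), its binary log is the usual building block: either full replication to a second host, or simply shipping the closed binlog files off-host on a schedule. A hedged sketch of the shipping variant (paths and hostnames are placeholders):

        # my.cnf on the primary:
        #   [mysqld]
        #   log-bin=/var/lib/mysql/binlog

        # Frequent cron job: close the current binlog, then copy finished ones off-host
        mysqladmin -uroot -ppass flush-logs
        rsync -az /var/lib/mysql/binlog.* backup-host:/srv/db-binlogs/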

    Read the article

  • What's the quickest way to dump & load a MySQL InnoDB database using mysqldump?

    - by Josh Schwartzman
    I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5 GB of data with mysqldump and MySQL 5.1. What are the best parameters (i.e. --single-transaction) that will result in the quickest dump and load of the data? Also, when loading the data into the second DB, is it quicker to: 1) pipe the results directly to the second MySQL server instance and use the --compress option, or 2) load it from a text file (i.e. mysql < my_sql_dump.sql)?
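    A sketch of the direct-pipe variant, with host and database names as placeholders; --single-transaction gives a consistent InnoDB snapshot without locking, --quick streams rows instead of buffering them, and ssh -C compresses on the wire:

        mysqldump --single-transaction --quick source_db \
            | ssh -C user@second-host "mysql target_db"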

    Read the article

  • How to create a snapshot or clone of a PHP/MySQL page... Inspiration needed

    - by jimbo
    Hi, we have a web application that creates a dynamic PHP page from all the MySQL-stored details a user has entered via a number of forms. So far so good, but we want this information stored somehow so it can be referred to at a later date, because an administrator can make changes to the data, which affects calculations that are worked out from the saved data. When going back over the saved data, we need to be able to see all the information submitted for that particular calculation, so if the data has changed we can still see what it was at the time of the calculation. We have thought that a snapshot taken when the calculation is done, a PDF of the web page, or something similar would do, but is this simple to do? I hope this makes sense...
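    One simple approach, sketched with hypothetical table and column names: serialize the inputs at the moment the calculation runs and file them in an insert-only snapshot table, so later edits to the live rows cannot rewrite history:

        -- Hypothetical snapshot table: one frozen copy of the inputs per calculation
        CREATE TABLE calculation_snapshot (
            calc_id INT PRIMARY KEY,
            form_data TEXT NOT NULL,   -- the submitted inputs, serialized (e.g. JSON)
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );

        INSERT INTO calculation_snapshot (calc_id, form_data)
        VALUES (42, '{"rate": 3.5, "term_months": 240}');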

    Read the article

  • Can a FreePBX backup be restored to a different version?

    - by Tim Long
    I run a small PBX based on the FreePBX distro of Asterisk. The installation has been steadily upgraded, but for various reasons we want to start again on a new server with a clean install from the distribution media. Will I be able to take a backup from the old server and restore it to the new server, even though the installs are different versions? How sensitive are FreePBX backups to the build version? Is it possible to do at least a partial restore?

    Read the article

  • Storage server architecture for backups. What is the best way? (pics inside)

    - by Kirzilla
    Hello, what is the best architecture for a storage server array? Requirements: a) an easy way to add one more server to the array; b) we don't have a single dedicated backup server; c) we need one backup of each "web" part of each server. Group #1 is a cross-server backup scheme; its main disadvantage is that we can't add just one server at a time, we have to add two. Group #2 is the same as Group #1 but with three or more servers. It also has a disadvantage: to add one more server, we have to move existing backups onto it. Any suggestions? Thank you.

    Read the article

  • BackupPC - how to back up remote (over-the-Internet) clients?

    - by Scott
    I am testing out BackupPC, which works great so far backing up Windows clients on a LAN via SMB (no backup client/agent required). However, I have quite a few laptops and desktops in various remote locations, some of which move around. I need some way to have those remote computers create an outgoing connection for backup purposes (Windows XP/7). I know BackupPC supports SMB, rsync and tar, but I believe these are all connections going from the server TO the client. So I either need a way to VPN the client in on a timed basis, or, better, the client could somehow connect to the server (SSH?) and initiate its own backup (rsync?). Of course, this all needs to be pre-installed by me and require no maintenance by the end user and no dialogs on their side. What do you think?
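    One workable pattern for the client-initiated direction, sketched with placeholder names: a Windows scheduled task on each laptop pushes over rsync-through-SSH to an account on the backup server (cwRsync and DeltaCopy both package rsync for Windows), using a key so no one is ever prompted:

        # Runs silently on the Windows client via a scheduled task (cwRsync/Cygwin)
        rsync -az -e "ssh -i /cygdrive/c/backup/id_rsa" \
            "/cygdrive/c/Users/alice/" backup@backup-server:/srv/laptops/alice/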

    Read the article

  • How can I re-program/flash a backup of the BIOS?

    - by user285705
    I have some computers with a particular BIOS setting that keeps everything running smoothly. The setting is not the default for the motherboard, so when the CMOS battery dies, the setting is erased and causes the user problems. How can I back up the BIOS and the settings I have now, and flash that file onto my entire stock of computers? I have attempted to use awdflash to back up my BIOS and then write that backup to the ROM chip, but I keep getting an error telling me that my file's part number doesn't match the system, or something like that. Basically, the file is reported as incompatible with the chip, even though I just backed it up from that chip. If anyone can shed some light on this for me, it would be helpful.
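    Two hedged notes. First, on most boards the settings themselves live in battery-backed CMOS RAM rather than in the flash chip, so a saved ROM image may not carry them at all, which could be part of the confusion here. Second, awdflash's save/program flags roughly follow the sketch below, but they vary by version, so check awdflash /? before relying on it:

        REM Save the current BIOS image to a file without programming anything
        awdflash backup.bin /pn /sy

        REM Flash the saved image to an identical board (some versions need /f
        REM to skip the part-number check; use that with great care)
        awdflash backup.bin /py /sn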

    Read the article

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base system and the /home directory. It started acting wonky, so I decided to do a reinstall with 12.10, intending to reinstall only the base partition. After several seconds I realized that the installer was repartitioning the drive and reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it finds ~100 unique partitions when I run it - mostly HFS+ or "Solaris /home" (which I think is just ext4; I've never had Solaris on this machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, then ~100 lines from the middle). Is there a way to combine or recreate the partitions and then run data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory - I'd rather not use photorec, since I don't have another 2 TB HD lying around to recover to. Thanks, Dan

        Mon Dec 10 06:03:00 2012
        Command line: TestDisk
        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org
        OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64
        Compiler: GCC 4.4
        Compilation date: 2012-11-27T22:44:52
        ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none
        /dev/sda: LBA, HPA, LBA48, DCO support
        /dev/sda: size 3907029168 sectors
        /dev/sda: user_max 3907029168 sectors
        /dev/sda: native_max 3907029168 sectors
        Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
        /dev/sr0 is not an ATA disk
        Hard disk list
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80
        Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07
        Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12
        Partition table type (auto): Intel
        Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0
        Partition table type: EFI GPT
        Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure: Bad GPT partition, invalid signature.
        search_part()
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB
        Results
        P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB
        interface_write()
        1 P MS Data 2048 3900753919 3900751872
        2 P Linux Swap 3900755968 3907028975 6273008
        search_part()
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 477915164
        recover_EXT2: part_size 3823321312
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        block_group_nr 1
        ....snip......
        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB
        Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB
        Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB
        Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB
        MS Data 174395467 178483851 4088385
        ..... snip (it keeps going on for quite a while)

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen many examples where users failed to restore their database because they made a mistake while taking the backup and were not aware of it. The tail-log backup was such an issue in earlier versions of SQL Server, but in the latest version the Microsoft team has cleared up the confusion with additional information on the backup and restore screens themselves. Now that this additional information is there, a few more people are confused because they have no clue about it: previously they did not see this as an issue, and now they are encountering the tail log as something new to learn. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, backing up and recovering the tail end of a transaction log. Many times when restoring a database over an existing database, SQL Server will warn you about needing to make a tail-end-of-the-log backup. This might be your reminder that you have chosen to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log. Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online to change its state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log. This is a great demo, but how could I back up the tail end of the log if the failure destroyed my entire instance of SQL Server and all I had left was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log.
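    A minimal T-SQL sketch of the sequence described above, with the database name and paths as placeholders; NO_TRUNCATE is what lets the log backup proceed even though the data file is gone:

        -- Back up the tail of the log even though the MDF is damaged or missing
        BACKUP LOG [ProductionDB]
        TO DISK = N'C:\Backups\ProductionDB_tail.trn'
        WITH NO_TRUNCATE;

        -- Later, restore the chain and finish with the tail
        RESTORE DATABASE [ProductionDB] FROM DISK = N'C:\Backups\ProductionDB_full.bak' WITH NORECOVERY;
        RESTORE LOG [ProductionDB] FROM DISK = N'C:\Backups\ProductionDB_tail.trn' WITH RECOVERY;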
    If I am performing proper backups, then my most recent full, differential and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration of how to perform these tasks. I really hope that none of you ever have to do this in production; however, it is a really good idea to know how, just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system but more of "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss or not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL backup and recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
