Search Results

Search found 7618 results on 305 pages for 'backup exec'.

  • Offline backup synchronization

    - by Pavan Kumar
    There is a central server running Windows Server 2003 and SQL Server 2005, and there are 7 client machines in various places, each with XP Pro and SQL Server 2005 installed. They are not interconnected, so they are physically separate. Roughly twice a month, one person visits each of these centers, takes a full database backup (the mdf and ldf files) on a pen drive, and brings it to the central server, which holds a central database with the same schema as all the client databases. I need to synchronize each backup database (belonging to a different center) one by one, updating existing data or inserting new data into the central database.

    The solution I considered was replication. The pen drive arrives at the central server with the 7 database instances, and the databases are attached one by one to the same SQL Server instance where the central database exists. My idea was then to replicate each backup database locally (on the same machine) using a single subscription (the central database) and multiple publications (the 7 attached databases). I started developing a UI in C# .NET to run transactional replication with push subscriptions programmatically using RMO (incomplete as of now, because there is no point finishing it once you know it is not the solution).

    Transactional replication can be initialized either with or without a snapshot. With a snapshot, whatever data is present in the central database is overwritten by the new data, so the data initially in the central database is lost. Without a snapshot, no data is sent from the backup database to the server: replication only picks up incremental changes made after replication is set up, so the data already in the backup database when the publication is created is never replicated when the snapshot agent first runs; only changes made thereafter would reach the central database. (Remember, I am not going to insert new data or make any changes to the backup database after I attach it to the central server.) So this solution is not feasible.

    I want a solution for synchronizing from one client database to the central database on the same machine using C#.NET. A small example, say with two databases with the same schema, DB1 (client) to DB2 (server), consisting of one or two tables, would be very helpful. The synchronization is not bidirectional: I only want to update existing data or insert new data from DB1 into DB2 (DB2 may contain some data initially). Thanks and Regards Pavan

    Read the article

  • Restore database to the point of disaster

    - by TiborKaraszi
    This is really basic, but so often overlooked and misunderstood. Basically, we have a database, and something goes south. Can we restore all the way up to that point? I.e., even if the last backup (db or log) is earlier than the disaster? Yes, of course we can (except in more extreme cases; read on), but many don't realize/do that, for some strange reason. This blog post was inspired by a thread in the MSDN forums, which exposed just this misunderstanding. Basically the scenario was that they...(read more)

    Read the article

  • Incremental file system backups

    - by brunopereira81
    I use VirtualBox a lot for distro / application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've seen create a new image of the complete file system. Are there any programs / file systems capable of taking a snapshot of the current file system and saving it in another location, but instead of making a complete new image each time, creating incremental backups? To put it simply: I want something like dd images of a file system, but producing incremental backups as well as full ones. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems.
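    A file-level (not dd-style block-level) approach that behaves like incremental snapshots is rsync with --link-dest: each run creates a new dated directory, and files unchanged since the previous run are hard-linked rather than copied again. A minimal sketch, assuming the source is /home/ and the backup target is /backup/snapshots (both paths are examples, not from the question):

        #!/bin/bash
        # Incremental snapshot-style backup using rsync --link-dest (paths are assumptions).
        SRC="/home/"
        DEST="/backup/snapshots"
        NEW="$DEST/$(date +%Y-%m-%d_%H%M%S)"
        LAST=$(ls -1d "$DEST"/*/ 2>/dev/null | tail -n 1)   # most recent previous snapshot, if any
        mkdir -p "$NEW"
        # Unchanged files are hard-linked from the previous snapshot, so each run only
        # stores what changed; the destination filesystem must support hard links.
        rsync -aHAX --delete ${LAST:+--link-dest="$LAST"} "$SRC" "$NEW"

    Each dated directory then looks like a full copy but only consumes space for changed files; LVM, Btrfs or ZFS snapshots are the closer block-level equivalents.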

    Read the article

  • Clonezilla failing as soon as image copying begins

    - by mmr
    I have been trying unsuccessfully to create an image of an Ubuntu 10.04 laptop system. As soon as the copying itself starts, the entire system crashes to a black screen. I suspect that the problem is overheating, and that's why I put an ice pack under the machine. That seems to have helped a bit, but it's still not getting through the copying process. Is there any other possible explanation for dying to a black screen like this? I'm just not relishing the task of removing the hard drive, mounting it elsewhere, and then doing a backup that way.
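    If it is not heat, the kernel log from the failed run may say so. A few diagnostic commands worth trying (a sketch; lm-sensors is the standard Ubuntu package name, and the log paths are the usual 10.04 defaults):

        # Install and run temperature monitoring, then watch it during a copy attempt.
        sudo apt-get install lm-sensors
        sudo sensors-detect
        watch sensors
        # After a crash, look for thermal or I/O errors recorded before the hang.
        grep -iE 'thermal|critical|error|panic' /var/log/kern.log /var/log/syslog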

    Read the article

  • Install a Wii Game Loader for Easy Backups and Fast Load Times

    - by Jason Fitzpatrick
    We’ve shown you how to hack your Wii for homebrew software and DVD playback as well as how to safeguard and supercharge your Wii. Now we’re taking a peek at Wii game loaders so you can back up and play your Wii games from an external HDD.

    Read the article

  • Are log records removed from ldf file for rollbacks?

    - by TiborKaraszi
    Seems like a simple enough question, right? This question (but more targeted, read on) was raised in an MCT forum. While the discussion was ongoing and I tried to come up with answers, I realized that this question is really several questions. First, what is a rollback? I can see three different types of rollbacks (there might be more, of course): regular rollback, as in ROLLBACK TRAN (or a lost/terminated connection); rollback done by restore recovery, i.e., the end time of the backup included some transaction...(read more)

    Read the article

  • sbackup: can not mount FTP automatically

    - by ledy
    In the sbackup configuration GUI I set the destination to ftp://user:pw@online/storage and it was marked as successfully connected. After the daily backup time I checked the FTP server and it was empty. The error mail says:

        Error in _do_mount: volume doesn't implement mount
        [ERROR_NOT_SUPPORTED - Operation not supported for the current backend.]
        Unable to mount: volume doesn't implement mount
        File access manager not initialized

    When I restart the sbackup GUI, it is no longer connected to the FTP server and I have to click the button again to connect the remote directory, although it still remembers my user/password. How can I make this connection permanent?
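    One possible workaround (a sketch; it assumes the curlftpfs package is available and that sbackup can write to a plain local path) is to mount the FTP account as a local directory outside of sbackup and point the backup destination at that mount point:

        # Mount the FTP storage as a local directory (credentials and paths from the question).
        sudo mkdir -p /mnt/ftpbackup
        sudo curlftpfs ftp://user:pw@online/storage /mnt/ftpbackup -o allow_other
        # To have it mounted automatically at boot, an /etc/fstab line like this is the
        # usual curlftpfs pattern:
        # curlftpfs#user:pw@online/storage  /mnt/ftpbackup  fuse  allow_other,_netdev  0  0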

    Read the article

  • Backing up bind mounted folders

    - by NahsiN
    My layout is as follows. LVM setup: /dev/VG/Documents, /dev/VG/Music, /dev/VG/Pictures, etc. Each of the LVs is bind mounted to the corresponding folder name in /home/foo. For example, /home/foo/Documents is bind mounted to /media/Documents (the mount point of /dev/VG/Documents), and so on. If I set up deja-dup to just back up my home folder, am I guaranteed that everything from my LVs will be backed up properly? So let's say I take away my LVs for some reason and choose to restore an earlier backup. Will my home folder contain everything from the LVs: all my docs, music, vids, etc.? My intuition tells me everything will be fine, but it doesn't hurt to ask the experts ;). Hope I have made myself clear. Thanks
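    A quick sanity check (a sketch using the paths from the question): to a backup tool a bind mount looks like an ordinary directory tree, so deja-dup backs up whatever it sees under /home/foo; the main risk is that if an LV fails to mount one day, the tool silently backs up an empty directory. You can verify what actually backs each folder before trusting a run:

        # Shows which device (LV) currently backs each directory deja-dup will read.
        df -h /home/foo/Documents /home/foo/Music /home/foo/Pictures
        # Sizes of what the backup will actually see; they should match the LV contents.
        du -sh /home/foo/*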

    Read the article

  • rsync verify a file already exists in dest folder so it will skip the copy on the 1st sync

    - by joel_gil
    I have been looking at different rsync tutorials for a specific situation I have. I have a home server with all my pics; this server is my backup. My PC is the one that receives the new pics, and until now I had been manually copying and pasting new photos from the PC to the server. I was trying to set up rsync to do this automatically, and in principle it works without problems. Now the issue: when I fire up rsync it starts copying all the files, even the ones that are already in the destination (because it is the 1st sync). So my question is: is it possible for rsync to verify that a file is the same (name/size/binary content) so it will skip the copy on the 1st sync?
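    By default rsync already skips files whose size and modification time match on both sides; files that were copied by hand often have different timestamps, which is why the first run re-sends them. A sketch of the two usual options (the paths and hostname are examples, not from the question):

        # First sync: compare file contents by checksum so identical files are skipped
        # even when their timestamps differ (slower, reads every file on both sides).
        rsync -av --checksum /home/me/Pictures/ server:/srv/backup/Pictures/
        # (--size-only is a cheaper but less safe alternative to --checksum.)

        # Subsequent syncs: the default quick check (size + mtime) is enough,
        # so unchanged files are skipped without being re-copied.
        rsync -av /home/me/Pictures/ server:/srv/backup/Pictures/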

    Read the article

  • After I backed up my ubuntu 12.04, my system crashed

    - by user95490
    Outline: I used the method shown at http://ubuntuforums.org/showthread.php?t=35087 to back up my Linux system, but somehow when I restore it, it crashes. Problem description: when I reboot, I get this message:

        Gave up waiting for root device. Common problems:
         - Boot args (cat /proc/cmdline)
           - Check rootdelay= (did the system wait long enough?)
           - Check root= (did the system wait for the right device?)
         - Missing modules (cat /proc/modules; ls /dev)
        ALERT! /dev/disk/by-uuid/9cf6f563-86d1-47be-bc26-92dd7df35cb3 does not exist.
        Dropping to a shell!
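    This usually means the restored /etc/fstab and boot configuration still reference the old partition UUID, which changed when the partition was recreated. A recovery sketch from a live CD (the device name is an example; adjust to the actual root partition):

        # Find the UUID the restored root partition actually has now.
        sudo blkid
        # Mount the restored system and fix the UUID references.
        sudo mount /dev/sda1 /mnt                     # assumed root partition
        sudo nano /mnt/etc/fstab                      # replace the old UUID with the one blkid shows
        # Regenerate the GRUB config and initramfs from inside the restored system.
        for d in /dev /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
        sudo chroot /mnt update-grub
        sudo chroot /mnt update-initramfs -u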

    Read the article

  • How to automount usb drive reliably without fstab

    - by user103279
    Hi, I need a way to mount a USB drive without using fstab. I cannot use fstab because the drive is not connected to my computer at boot; that causes an issue during any one-off reboot, because startup hangs waiting for the device until a keyboard intervention skips it. I also cannot use my current script, which just does mount /dev/sde1 /media/Backup, because the device name sometimes changes to sdf. Consider this a server install; I can't use tools at the user or GUI level. I suppose the sum of my question is: how do I manually mount a USB drive from the command line, given that the /dev/sdX name isn't consistent? Thanks,
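    The usual answer is to mount by filesystem UUID or label instead of the unstable /dev/sdX name; both are stable identifiers for the same device. A sketch (the UUID and label values below are examples; use what blkid reports for your drive):

        # Discover the drive's UUID and label once, while it is plugged in.
        sudo blkid
        # Mount by UUID (stable across sde/sdf renumbering):
        sudo mount UUID=1234-ABCD /media/Backup
        # Or by label, via the symlinks udev maintains:
        sudo mount /dev/disk/by-label/Backup /media/Backup

    (If fstab ever becomes acceptable again, the noauto,nofail mount options avoid the boot-time hang for absent devices.)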

    Read the article

  • Deciding between Apache Commons exec or ProcessBuilder

    - by Moev4
    I am trying to decide whether to use ProcessBuilder or Commons Exec. My requirements are simply to create a daemon process whose stdout/stdin/stderr I do not care about, and to execute a kill to destroy this process when the time comes. I am using Java on Linux. I know that both have their pains and pitfalls (such as needing a separate thread to drain the output streams to avoid blocking or deadlocks, and closing the streams so as not to leave open file handles around), and wanted to know if anyone has suggestions one way or the other, as well as any good resources to follow.

    Read the article

  • i386-mingw32-g++: error trying to exec 'cc1plus': execvp: No such file or directory

    - by Cathy
    If I compile this C++ program in SuSE Linux

        #include <iostream>
        using namespace std;

        int main() {
            cout << "Hello World!";
            return 0;
        }

    by typing

        i386-mingw32-g++ helloworld.cpp

    I get the following error:

        i386-mingw32-g++: error trying to exec 'cc1plus': execvp: No such file or directory

    Is this because the MinGW package I installed contains only gcc? Because of that, I downloaded the gcc-g++-3.4.5.rpm package and just copy-pasted the i386-mingw32-g++ and cc1plus executables along with the C++ include files. Please reply. Thank you.

    Read the article

  • CruiseControl: How to read logs from exec task

    - by Marty
    I start an external Groovy script via CruiseControl, which basically works. My problem is that if the Groovy script fails, I only get "error string found" in my CruiseControl web app and email; the output is not even in the log files. The Groovy script writes its output to stdout and to a logfile. How is it possible to display the output of an external script in the CruiseControl logs?

        <project name="proj">
          <schedule>
            <exec workingdir="/myscripts/folder"
                  command="//bin/groovy"
                  args="build.groovy -p ${project.name}.properties"
                  errorstr="Exception"/>
          </schedule>
        </project>
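    One low-tech workaround (a sketch; the wrapper path and log location are assumptions, not a CruiseControl feature) is to point the <exec> builder's command attribute at a small shell wrapper that captures everything the Groovy script prints, so the output at least ends up in a file you can publish alongside the build:

        #!/bin/bash
        # run-build.sh -- hypothetical wrapper invoked by <exec> instead of groovy directly.
        LOG=/myscripts/folder/build-output.log        # assumed location
        groovy build.groovy "$@" 2>&1 | tee "$LOG"
        # Preserve groovy's exit status, not tee's, so CruiseControl still sees failures.
        exit ${PIPESTATUS[0]}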

    Read the article

  • Visual Studio Add-in Exec running Automatically

    - by NewProgrammer
    Hey guys, I have a dilemma that I am uncertain about, as I am not sure whether it is possible for a Visual Studio add-in to run its code automatically. I need an add-in that can run passively, like a logger for Visual Studio. However, the Exec method as I know it can only execute command-bar functionality, while I need code to execute when the user right-clicks or selects a line of text. I was able to make an automatic logger by putting my code in QueryStatus, but that would be considered bad programming, and it does not log when I simply select a piece of text. Does anyone know how to make code run passively or automatically in a Visual Studio add-in?

    Read the article

  • PHP exec problem with s3-put

    - by schneck
    Hi there, I use the s3-bash project to upload data to an S3 bucket. My command looks like this:

        /mypath/s3_bash/s3-put -v -k '123456789' -s '/mypath/secret' -T '/mypath/upload/myuploadfile' '/my.bucket/mykeyname'

    I can run the command from the command line (Mac OS X) and it works well. Now I want to execute it from a PHP script:

        exec($command, $output);

    but in $output the s3-put command only returns its help text. I log the command, and it works if I copy and paste it from the log to the command line, so the command itself is not the problem. It seems that PHP does not pass all the parameters to the command line, although I run escapeshellarg() over all the parameters. I'm using a local XAMPP test environment and safe_mode is off. Any ideas?

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

        /         vgA/lvRoot                     [7.5G/50G]
        /tmp      vgB/lvTmp                      [195M/30G]
        /var      vgB/lvVar                      [780M/30G]
        swap      vgB/lvSwap                     [16.00 GB]
        /media1   vgC/lvMedia1                   [400G/975G]
        /media2   vgC/lvMedia2                   [75G/295G]
        /boot     partition (no volume group)    [95M/200M]
        /video    partition (no volume group)    [450G/950G]
        /backups  vgD/lvBackupTarget             [800G/925G]
        /home     vgE/lvHome                     [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive; I actually have a second external USB drive if needed.) I'd like to back up "/", /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots

    My questions are:

    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
       a. Should I create one or more regular partitions, or a physical volume with one or more logical volumes?
       b. Would it be advisable to exactly mirror the source PV/LV layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd? (See the sketch below.)

    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step, cookbook-style help because I don't do much server admin work.

    (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing it with the external USB drive(s) and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, which would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

    EDIT: I'll propose a strategy. The idea came from a comment here: LVM mirroring VS RAID1 -- "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:

    1. Establish the LV mirror on the external drive.
    2. Break the link with the mirror.
    3. Create a (persistent) snapshot on the mirror.
    4. After a week, resync the mirror with the original source and update the mirror.
    5. Break the link and create another snapshot on the mirror.

    Obviously, the mirror will be like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time efficient, it would give a nice full & differential type backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?

    EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots. This article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still want feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots", but that might be wrong.
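    Regarding question 3 above (getting the snapshot contents onto the external drive), the basic per-LV cycle looks like the following minimal sketch. It assumes the external drive is already formatted and mounted at /mnt/usb2tb and uses vgE/lvHome as the example; the snapshot size, names and mount points are placeholders to adapt to the layout above.

        # Create a point-in-time snapshot of the logical volume.
        lvcreate --snapshot --size 5G --name lvHome_snap /dev/vgE/lvHome
        mkdir -p /mnt/lvHome_snap
        mount -o ro /dev/vgE/lvHome_snap /mnt/lvHome_snap
        # Copy the frozen view to the external drive (file level; dd of the snapshot
        # device would give a block-level image instead).
        rsync -aHAX --delete /mnt/lvHome_snap/ /mnt/usb2tb/home/
        # Release the snapshot so it stops consuming copy-on-write space.
        umount /mnt/lvHome_snap
        lvremove -f /dev/vgE/lvHome_snap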

    Read the article

  • transfer itunes iphone backup from a PC to MAC

    - by Bala R
    I upgraded my phone to the iOS 4 GM on my Mac using the iTunes 9.2 beta for Mac. I have a backup of my contacts on my Windows machine with iTunes 9.1.1, which won't talk to iOS 4, and there is no iTunes 9.2 for Windows yet. Is there any way I can transfer the backup from my Windows PC to my Mac so I can restore my settings and contacts?

    Read the article

  • Will Yosemite Server backup 8.1 work on Windows Server 2008 R2 to backup Exchange 2010?

    - by best
    I've inherited the backup support of some Windows Server 2003 machines that use Yosemite Server Backup 8.1. The company has joined the BizSpark program and wants to use its license for Windows Server 2008 R2 and Exchange 2010. I've tried to email Barracuda, who took over Yosemite, to ask this, but with no success (do you need a support contract?). I don't have a spare machine, or even space for a VM, to test it on; does anybody know if it will work?

    Read the article

  • Backup virtual machine

    - by Lucas
    I use VMware Fusion on Mac OS X and back up my system with Time Machine. Now I have read that this will not back up my virtual machine with usable results. What are my options for backing up my virtual machine?

    Read the article

  • Sql compression and backing up in sql server 2005

    - by cagin
    Hi there, I want to back up my database with compression. This is my code:

        BACKUP DATABASE dbbbb TO DISK = N'C:\dbbb.bak' WITH COMPRESSION

    This runs correctly in SQL Server 2008, but my server has SQL Server 2005, and COMPRESSION is not a recognized BACKUP option in 2005. How can I compress my backup in 2005? Thank you for your help.

    Read the article
