Search Results

Search found 5747 results on 230 pages for 'backup'.


  • Unable to create a Windows 7 system image of a failing hard drive

    - by Rahul
    The hard disk of my one-year-old T400 ThinkPad has started failing periodic hardware tests. I get a "Targeted Read Test Failed" error, and the "SMART short self test" times out. I am now trying to create a Windows 7 system image of the hard disk, but it fails without giving any specific error messages. I tried using Comodo Backup but got an error (code 101117) there as well. I have copied the important files to Dropbox, but would like to take a full system backup as I have plenty of software installed on the machine. Does anyone know why this is happening and how I can take a backup of the system image?
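    One possible fallback (a sketch, not a tested fix for a failing disk): Windows 7's imaging can also be driven from an elevated command prompt with wbadmin, which sometimes reports a more specific error than the GUI does. The external target drive letter E: is a placeholder:

        REM Sketch: create a system image of the critical volumes onto drive E:.
        REM E: is a placeholder for the external backup drive.
        wbadmin start backup -backupTarget:E: -allCritical -quiet

    If the disk itself is the problem, a sector-level clone with a tool that tolerates read errors (for example GNU ddrescue from a live CD) may be the safer first step before any file-level backup.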


  • Is there a way to schedule an automatic WinClone run on my Bootcamp partition?

    - by user17873
    Hi, I currently have Time Machine set up to back up my entire OS X installation. I also have a backup tool within my Boot Camp Windows 7 installation which automatically backs up my Windows profile data to an external drive partition. Finally, I'm also backing up my Boot Camp partition weekly with WinClone and storing the image on an external drive. The final piece I need to complete my external backup process is to have WinClone back up my Boot Camp partition automatically once a week, rather than having to remember to run it manually. Is this possible?
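    It is not clear that WinClone exposes a scriptable interface, so this is only a sketch of the scheduling half: a launchd agent (saved, for example, as ~/Library/LaunchAgents/com.example.winclone.plist - a hypothetical label and path) can at least launch the application on a weekly schedule; the clone itself would still depend on WinClone's own automation support, if any.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.example.winclone</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/open</string>
                <string>-a</string>
                <string>Winclone</string>
            </array>
            <!-- Sundays at 03:00 -->
            <key>StartCalendarInterval</key>
            <dict>
                <key>Weekday</key>
                <integer>0</integer>
                <key>Hour</key>
                <integer>3</integer>
                <key>Minute</key>
                <integer>0</integer>
            </dict>
        </dict>
        </plist>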


  • How to harden a Windows 2003 service account

    - by ITMan
    I remember a TechNet or Windows IT Pro article from a couple of years ago about how to harden Windows 2003 service accounts. For backup software purposes (such as Backup Exec / AppAssure / etc., please don't bash these) I have to create a domain admin account (usually called something such as "backup") and have the services run from that account. In this article, I remember, you can create the domain admin user "backup" but prevent it from logging on interactively. Do any of you remember such an article, or know how to do this?
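    Whatever the original article was, the usual mechanism is the "Deny log on locally" / "Deny log on through Terminal Services" user rights, set either in Group Policy (User Rights Assignment) or from the command line with ntrights.exe from the Windows Server 2003 Resource Kit Tools. A sketch, with CONTOSO\backup as a placeholder account:

        REM Sketch: deny interactive and remote-interactive logon to the account.
        ntrights +r SeDenyInteractiveLogonRight -u CONTOSO\backup
        ntrights +r SeDenyRemoteInteractiveLogonRight -u CONTOSO\backup

    The account keeps its service and network logon rights, so services can still run under it.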


  • Replicate a big, dense Windows volume over a WAN -- too big for DFS-R

    - by Jesse
    I've got a server with a LOT of small files -- many millions of files, and over 1.5 TB of data. I need a decent backup strategy. Any filesystem-based backup takes too long -- just enumerating which files need to be copied takes a day. Acronis can do a disk image in 24 hours, but fails when it tries to do a differential backup the next day. DFS-R won't replicate a volume with this many files. I'm starting to look at Double-Take, which seems to be able to do continuous replication. Are there other solutions that can do continuous replication at a block or sector level -- not file-by-file -- over a WAN?


  • Turning a (dual boot) Ubuntu 9 into a virtual machine

    - by xain
    I have a dual-boot box (Ubuntu 9 and Vista) and I'm about to upgrade Vista to Windows 7. Since Ubuntu is my main development environment, I'd like to use it as-is from the new environment via VirtualBox or VMware. I know of tools like Clonezilla that back up entire drives; in my case, the Linux partitions are distributed among several disks, which in turn contain both Linux and Windows data. My intention is to use some backup tool (like Clonezilla, if it fits) that allows me to back up ONLY the Linux partitions distributed over the several disks. Any hints? Thanks in advance.
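    Clonezilla's "saveparts" mode does image selected partitions rather than whole disks, so it likely fits. Its underlying engine can also be run by hand; a sketch, where the device names are placeholders and the filesystems are assumed to be ext3/ext4:

        # Sketch: image individual Linux partitions with partclone
        # (the tool Clonezilla uses internally). Device names are placeholders.
        partclone.ext4 -c -s /dev/sda5 -o /mnt/backup/sda5-root.img
        partclone.ext4 -c -s /dev/sdb2 -o /mnt/backup/sdb2-home.img

    The images can later be restored into the partitions of a virtual disk, though converting a physical install to a VM may additionally need bootloader and fstab adjustments.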


  • Oracle 10g Failover Database - How to fail back?

    - by rrkwells
    I want to know how the failover database concept works after recovery. We have defined our application to connect to a backup database in case the production database fails. If this happens, then all the transactions will be happening on that backup database. Once the production db server is running again, then how do we make sure the changes made in the backup database will be reflected on the production database? We want to make sure that any changes made while failed over are not lost. We are using Oracle 10g.
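    How fail-back works depends on how the backup database is kept in sync. Assuming it is a Data Guard physical standby (an assumption; the question doesn't say), a failover makes the standby the new primary, and changes made there are shipped back once the old production database is reinstated as a standby (after a true failover this generally requires Flashback Database or re-creating the standby). The roles are then swapped back with a switchover; a sketch in SQL*Plus:

        -- Sketch, assuming a Data Guard physical standby (10g syntax).
        -- On the current primary (the former backup database):
        ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

        -- On the former production database, once it has applied all redo:
        ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
        ALTER DATABASE OPEN;

    If the backup database is maintained some other way (manual copies, storage replication), the changes made during the outage would instead have to be exported and re-applied by hand.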


  • Get Directory Size through SFTP

    - by Nongo
    A client has a managed web server; on this server is an e-commerce script, and part of the script dumps a backup every week. The backup is stored on the web server (not under the HTTP root). The ISP takes a copy, and my client wants to take a copy too. Before downloading the files, I want to be able to calculate the backup directory's size - but the only access I have is through SFTP. Is it possible to easily get the directory size and then use this in a PowerShell script? NOTE: I have written an automatic download script in PowerShell, and I want to extend it. Forgive me if this sounds vague; I can provide further info if you have any specific questions.
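    SFTP has no recursive "directory size" call, so the usual approach is to enumerate the remote files and sum their sizes. A sketch using the WinSCP .NET assembly (an assumption - the existing script may use a different SFTP library; all connection details are placeholders):

        # Sketch: total the size of a remote directory tree over SFTP.
        Add-Type -Path "C:\Program Files (x86)\WinSCP\WinSCPnet.dll"

        $options = New-Object WinSCP.SessionOptions -Property @{
            Protocol              = [WinSCP.Protocol]::Sftp
            HostName              = "server.example.com"      # placeholder
            UserName              = "backupuser"              # placeholder
            Password              = "secret"                  # placeholder
            SshHostKeyFingerprint = "ssh-rsa 2048 xx:...:xx"  # placeholder
        }

        $session = New-Object WinSCP.Session
        try {
            $session.Open($options)
            # Recursively enumerate every file under the backup directory.
            $files = $session.EnumerateRemoteFiles(
                "/home/client/backups", $null,
                [WinSCP.EnumerationOptions]::AllDirectories)
            $totalBytes = ($files | Measure-Object -Property Length -Sum).Sum
            "Backup directory size: {0:N0} bytes" -f $totalBytes
        }
        finally {
            $session.Dispose()
        }

    The resulting total can then drive the existing download logic, for example a free-space check before the transfer starts.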


  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Books Online:

    A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot's creation. A snapshot persists until it is explicitly dropped by the database owner.

    If you do not know how a Snapshot database works, here is a quick note on the subject. However, please refer to the official description on Books Online for accuracy. A Snapshot database is a read-only database created from an original database called the "source database". This database operates at the page level. When a Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the operating system. When any data page is modified in the source database, that data page is copied to the Snapshot database, causing the sparse file size to increase. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the Snapshot database keeps presenting the data exactly as it was at the moment the snapshot was created.

    Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only; there are a few considerations of CPU, disk I/O and memory, which will be discussed in future posts.

    Create Snapshot
    Delete Data from Original DB
    Restore Data from Snapshot

    First, let us create the Snapshot database and observe the sparse file details.

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    Now let us see the resultset for the same.

    Now let us delete something from the original DB and check the same details we checked before.

        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    When we check the details of the sparse file created by the Snapshot database, we will find some interesting details. The details of the regular DB remain the same. It clearly shows that when we delete data from the regular/source DB, the data pages are copied to the Snapshot database. This is the reason why the size of the Snapshot DB increases.
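    For readers who want to verify the sparse-file behaviour from T-SQL rather than in the file system, here is a small sketch (the file id of 1 assumes the snapshot has a single data file):

        -- Sketch: check the actual on-disk size of the snapshot's sparse file.
        SELECT DB_NAME(database_id) AS database_name,
               size_on_disk_bytes
        FROM sys.dm_io_virtual_file_stats(DB_ID('SnapshotDB'), 1);

    Run it before and after the DELETE to watch size_on_disk_bytes grow as pages are copied in.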
    Now let us take this small exercise to the next level and restore our deleted data from the Snapshot DB to the original source DB.

        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Now let us check the details of the SELECT statements, and we can see that we were successfully able to restore the database from the Snapshot database. This is clearly a very useful feature in case you encounter a business need for it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can potentially be used to achieve any other tasks.

    The complete script of the aforementioned operation, for easy reference, is as follows:

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology


  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is an essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I requested SSIS expert Andy Leonard to discuss one of the most interesting concepts of SSIS: catalog messages. There is plenty of interesting and useful information captured in the SSIS catalog, and we will learn together how to explore it.

    The SSIS Catalog captures a lot of cool information by default. Here's a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It's one of my favorite messages in the SSIS log because it gives me excellent information when I'm tuning SSIS data flows. The message reads similar to:

    Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory.

    The query:

        USE SSISDB
        GO

        DECLARE @MessageSourceType INT = 60
        DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed '
        DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was '
        DECLARE @CacheUsedString VARCHAR(100) = 'The cache used '
        DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%'
        DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%'
        DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%'

        SELECT operation_id
         , SUBSTRING(MESSAGE,
             (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
              - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount
         , SUBSTRING(MESSAGE,
             (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
              - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime
         , CASE WHEN (CONVERT(numeric(3,3), SUBSTRING(MESSAGE,
             (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
              - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0
           THEN 0
           ELSE CONVERT(bigint, SUBSTRING(MESSAGE,
             (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
              - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))
             / CONVERT(numeric(3,3), SUBSTRING(MESSAGE,
             (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
              - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))
           END AS LookupRowsPerSecond
         , SUBSTRING(MESSAGE,
             (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))
              - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed
         , CASE WHEN (CONVERT(bigint, SUBSTRING(MESSAGE,
             (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
              - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))) = 0
           THEN 0
           ELSE CONVERT(bigint, SUBSTRING(MESSAGE,
             (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))
              - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))))
             / CONVERT(bigint, SUBSTRING(MESSAGE,
             (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
             ((CHARINDEX(' ', MESSAGE,
                PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
              - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))
           END AS LookupBytesPerRow
        FROM [catalog].[operation_messages]
        WHERE message_source_type = @MessageSourceType
         AND MESSAGE LIKE @StartOfIDSearchString
        GO

    Note that you have to set some parameter values:

    @MessageSourceType [int] - represents the message source type value from the following results:

        Value   Description
        10      Entry APIs, such as T-SQL and CLR stored procedures
        20      External process used to run package (ISServerExec.exe)
        30      Package-level objects
        40      Control Flow tasks
        50      Control Flow containers
        60      Data Flow task
        70      Custom execution message

    Note: taken from Reza Rad's (excellent!) helper.MessageSourceType table found here.

    @StartOfIDString [VarChar(100)] - use this to uniquely identify the message field value you wish to parse. In this case, the string 'The Lookup processed ' identifies all the Lookup Transformation messages I desire to parse.

    @ProcessingTimeString [VarChar(100)] - this parameter is message-specific. I use it to search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string 'The processing time was '.

    @CacheUsedString [VarChar(100)] - this parameter is also message-specific. I use it to search the message field value for the beginning of the Lookup Cache Used value. It returns the memory used, in bytes. For this execution, I use the string 'The cache used '.

    The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for the ratio calculations: LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. Here are the results in an SSMS grid:

    This is not the only way to retrieve this information, and much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post.

    If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: SSIS


  • Backup Exec job completed with exceptions: RWS_AttachToDLE

    - by HannesFostie
    Two of this weekend's jobs completed with exceptions and mention "RWS_AttachToDLE". I get the feeling the job did in fact complete without missing data, but I would like to be 100% sure (and can't verify the backup myself right now - my colleague is out of the office, and the backup in question is a bit of a black box for me: it works, but I am not familiar with its inner workings). Also, how can I prevent this from happening? Google didn't prove to be very helpful, and Experts Exchange seems to have changed its system so that you can't simply scroll down to see the answers to a particular question ;-)


  • Rsync: how to mount truecrypt on-the-fly on the receiving side?

    - by deepc
    The short version: how can I keep an rsync backup on a TrueCrypt volume? The hard part is to mount/unmount this volume on the fly when it is needed for rsync.

    Details

    This is my current backup configuration (which works fairly well for the most part):

    backup source is on Win7 64-bit, destination is a remote Linux box (Debian)
    actual data transfer is done by rsync via ssh (cwRsync with cygwin)
    rsync daemon is started on demand via ssh

    On the Linux box the backup is protected by file permissions only. I want to increase security here and put the backup into a TrueCrypt volume. I can fuse-mount that volume manually in the shell. The question is now: how can I make rsync not only open an ssh connection and start the rsync daemon, but also mount the TrueCrypt volume before (and unmount it after)? My money is on the option --rsync-path, which can be used to pass a command line to ssh - provided that stdin and stdout still work the same. I guess that command would have to be a shell script. Is this possible, and what would the script look like? For reference, here's a quote of that option:

    --rsync-path=PROGRAM
    Use this to specify what program is to be run on the remote machine to start-up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate. One tricky example is to set a different default directory on the remote machine for use with the --relative option. For instance: rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/

    This is the full rsync man page.

    TrueCrypt volume auto-mount: solved! Turns out this option is actually key to auto-mounting the TrueCrypt volume on the remote side. The following command line does the trick (one line!):

    rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" --rsync-path="/usr/local/bin/truecrypt -d && /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" $localSourceDir $user:$remoteMountMountDir

    TrueCrypt volume auto-dismount: still open. How can I unmount the volume when rsync is done? Not sure if the following makes sense to anyone, but I'll give it a try... Right now I am unmounting (truecrypt -d), then mounting again, then continuing with rsync. At this point rsync needs to do its thing, but I don't know when it's done. Adding "... rsync && truecrypt -d" to the command line does not work, because then the rsync daemon does not start. This is because rsync starts the daemon with the parameter --server on the remote side, and that parameter would go to the final "truecrypt -d".
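    One possible way to close the auto-dismount gap (an untested sketch): move the whole mount / serve / dismount sequence into a wrapper script on the Debian box and point --rsync-path at that script. rsync appends its --server arguments to whatever command --rsync-path names, so the wrapper receives them in "$@", and the dismount line runs only after the rsync server process exits:

        #!/bin/sh
        # /usr/local/bin/rsync-tc (hypothetical path): mount, serve, dismount.
        # $TC_PASSWORD, the volume path and the mount point are placeholders.
        /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,umask=0007 \
            --non-interactive -p "$TC_PASSWORD" /path/to/volume /mnt/backup
        /usr/local/bin/rsync "$@"      # rsync's --server arguments land here
        status=$?
        /usr/local/bin/truecrypt -d    # runs only after rsync has finished
        exit $status

    The client-side call then becomes --rsync-path="/usr/local/bin/rsync-tc" instead of the inline && chain.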


  • Amanda backup problem

    - by hossam alkhalili
    Hello, I installed Amanda on CentOS 5.5 to back up Windows 7 and Windows Server 2008 over the network, following the 15-minute installation guide. But when I type amcheck DailySet1 I get "request failed". If I then run amservice as the amandabackup account to pin down the problem, I get "Permission denied"; as root I get "OPTIONS features=ff7fffff9cfeffffd3cf1300;". I use ZWC on Windows 7 as the agent. Can anyone help me? Thanks.

        -sh-3.2$ amcheck DailySet1
        Amanda Tape Server Host Check
        Holding disk /dumps/amanda: 1791315968 KB disk space available, using 1791213568 KB
        slot 1: volume 'DailySet1-01'
        Will write to volume 'DailySet1-01' in slot 1.
        NOTE: skipping tape-writable test
        NOTE: conf info dir /etc/amanda/DailySet1/curinfo does not exist
        NOTE: it will be created on the next run.
        NOTE: index dir /etc/amanda/DailySet1/index does not exist
        NOTE: it will be created on the next run.
        Server check took 0.880 seconds

        Amanda Backup Client Hosts Check
        WARNING: jrcbs01.jrc.local: selfcheck request failed: Connection refused
        Client check: 1 host checked in 10.020 seconds. 1 problem found.

        amservice 192.168.1.1 bsdtcp noop
        [root@jrcbs01 ~]# amservice 192.168.1.5 bsdtcp noop


  • SQL Server Rights to backup drive

    - by Sam
    I'm trying to copy a backup I've made from one server to another using either an SSIS or PowerShell step in a job. I've run into the same error on both systems when running the step under the SQL Agent: I receive errors that the path does not exist. I've tried granting the agent rights to e:\backups, where the file is located, but it still doesn't work. When I use a proxy for the step, it works fine. Can anyone help me with what permissions to grant to the SQL Agent? Rights look to have been granted to MSSQL$Instance1 on the backup drive.
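    One thing worth checking (a sketch; the account and instance names are placeholders, and whether your setup uses a per-service SID is an assumption): grant the rights to the identity the Agent service actually runs as, not to the MSSQL$Instance1 database engine service:

        REM Sketch: give the Agent's identity modify rights on the backup folder.
        icacls E:\backups /grant "DOMAIN\sqlagent-svc:(OI)(CI)M"
        REM Or, if the Agent runs under a per-service SID (SQL 2008+ on Win2008+):
        icacls E:\backups /grant "NT SERVICE\SQLAgent$Instance1:(OI)(CI)M"

    If the step writes to a remote share, the grant has to be applied to the share/NTFS permissions on the destination server for the Agent account's domain identity.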


  • 2 servers set up for redundancy, backup

    - by minal
    I presently have one dedicated virtual server running my website/blog/mail, etc. This is on Hyper-V with 512MB RAM, running Windows Web 2008. Within the VM I have these running:

    SmarterMail - for email
    MS DNS - I have my own nameservers on this server
    SQL Express
    IIS7
    2 IP addresses

    I have now leased two physical servers: P4 2.6GHz, 1GB RAM, 80GB HDD. With these new servers I get 2 IPs per server as well. These are running Windows 2008 Standard. With the VM, the HDD was obviously on a RAID setup, so I was not worried about hardware issues as it fell on the provider to manage. However, with the new servers the HDD is not RAIDed, hence my concern is that if one fails I need a backup position. What would be the most suitable setup to go for? I am thinking:

    Server 1 (Web/Primary DNS):
    DNS - NS1
    SQL Express - OFF; turn on when required, i.e. when Server 2 is down
    SmarterMail - OFF; turn on when required, i.e. when Server 2 is down
    IIS 7

    Server 2 (SQL/Backup):
    DNS - NS2
    SQL Web Edition
    SmarterMail
    IIS 7

    How can I set it up so that if one goes down I can move everything to the other instantly, or by manual switchover? I am confused, as other DNS servers will cache the web server's IP address for requests, and if that server goes down the backup server will have a different IP. How do I make this work? I will be doing routine backups, in which case I will keep copies of the backups on both servers. If I mirror the same content on both servers, I lose the chance to get the true performance out of them; it's as if one server is always on standby. Ideally I want SQL and web on two different machines for best performance. If Server 1 goes down, I should be able to switch to Server 2 fairly easily. I don't have a problem with manual intervention to start the SQL/mail services, etc. In terms of scalability, the VM has coped pretty well to date. Moving forward, the SQL and IIS workload is going to double pretty quickly. Some ideas would be great.


  • How to set up an exim backup mail server

    - by luciano rinetti
    I have been using Exim for some years (currently v4.74 on Ubuntu Server 11.04) with good results, together with ClamAV and SpamAssassin. Now I'd like to set up a backup server to improve continuity of service. Reading the official Exim documentation (the specification and the Philip Hazel book on Exim 4, 2nd edition), I have not found a complete guide to implementing a synchronized structure (primary + backup). Could you point me to a document or URL that would let me set this up and offer a better service? Best regards, luciano
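    A note on what "backup" usually means here: Exim itself does not synchronize two servers; the standard pattern is a secondary MX host that accepts and queues mail for your domains while the primary is down, then delivers it when the primary returns. A sketch (domain and host names are placeholders; Debian/Ubuntu's split configuration usually sets the relay list via dc_relay_domains in update-exim4.conf.conf instead of a raw domainlist):

        # DNS: the backup host gets a higher (less preferred) MX value.
        example.com.    IN MX 10 mail1.example.com.
        example.com.    IN MX 20 mail2.example.com.

        # Exim on mail2: accept and relay the domain without hosting it locally.
        domainlist relay_to_domains = example.com

    Queued mail on the secondary is retried toward the primary automatically; the same ClamAV/SpamAssassin checks should run on the backup MX too, or spam will slip in through it.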


  • Automating the backup of my databases and files with cron

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?

        mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
        svn commit -m "Committing the working copy containing the database dump"

    1) First of all, is this a good approach?
    2) It is not clear how to specify the repository and the working copy with svn.
    3) How can I run svn only when the mysqldump is done and not before, avoiding conflicts?

    Any other tips? Thanks.
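    A sketch of one way to address 2) and 3) (paths are assumptions): svn infers the repository from the checked-out directory you run it in, so dumping straight into a working copy and chaining the commands in a script settles both the "which repository" question and the ordering - each step runs only if the previous one succeeded:

        #!/bin/sh
        # Sketch: dump into the svn working copy, commit only on success.
        set -e                                  # stop at the first failure
        WC=/home/backup                         # assumed to be an svn working copy
        mysqldump -u root -pPASSWORD database_name \
            | gzip > "$WC/database_$(date +%m-%d-%Y).sql.gz"
        cd "$WC"
        svn add --force .                       # schedule the new dump file
        svn commit -m "Committing the working copy containing the database dump"

    The crontab entry then just calls the script (e.g. 30 2 * * * /usr/local/bin/db-backup.sh, a hypothetical path); note that inside a script, unlike in a crontab line, the % in the date format must not be escaped.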


  • Symantec CPS / Backup Exec 11D Service stuck in "Starting" Status

    - by user42289
    I have two Windows 2003 servers (one Standard Edition, one SBS), both SP2, and both virtual machines on Microsoft Virtual Server 2005 R2. All of a sudden, about two weeks ago, Symantec Backup Exec / CPS 11d stopped working on them. One is the media server, one is our Exchange 2003 server. There is another copy of CPS on our file server whose service is running fine; however, the one that is fine is not a VM. By "stopped working" I mean the "Backup Exec Continuous Protection Agent" service is stuck in "Starting" status. On the non-Exchange server I've tried uninstalling the last Windows updates that were installed around the time of the failure. I've tried repairing the CPS installation, and I've tried uninstalling and reinstalling. Exact same problem in the end.



  • Mac OS X Server mobile account VS Time Machine Network Backup

    - by elhombre
    I am installing a server at home to manage my family's Mac clients. First I wanted to make Time Machine backups over the internal network to an external hard drive connected to my Mac OS X Server (10.6), but when I read about mobile accounts and their synchronization features, I became confused about the differences between the two services. So what are the differences between a mobile account and a Time Machine backup made over the network? Can the synchronized mobile account be backed up to an external hard disk, and if so, how?


  • Restoring Subversion repositories from backup

    - by John Hoge
    Hi, I had to restore a Subversion server from a backup image taken the previous night. Everything worked fine after the restore except for one repository. A working copy had been committed on the server after the latest backup, so this working copy had newer files than the restored repository. I tried to commit the files using TortoiseSVN, but SVN didn't recognize that the files in the working copy were newer than those in the repository. I'm using Subversion Server 1.6.5 on Windows 2003 Server and TortoiseSVN 1.6.8 64-bit on a Win7 64-bit client. Thanks, John
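    This is expected behaviour: the working copy's recorded BASE revisions no longer exist in the restored repository, so Subversion treats the working copy's metadata as out of date rather than "newer". The usual salvage (a sketch; the URL and paths are placeholders) is to export the files out of the stale working copy, check out a fresh one, and commit the differences:

        # Sketch: salvage files from a working copy whose revisions were lost.
        svn export /path/to/old-wc /tmp/rescued           # files only, no .svn metadata
        svn checkout http://server/svn/repo /tmp/new-wc   # clean checkout of restored repo
        cp -r /tmp/rescued/* /tmp/new-wc/
        cd /tmp/new-wc
        svn add --force .                                 # schedule files new to the repo
        svn commit -m "Re-commit work lost in the restore"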


  • Using wbadmin to back up and recover

    - by g7rpo
    Hi, I am using wbadmin to perform backups of a specific folder, primarily to back up my VHD files, and this is working fine. But when I tried to recover the files today using a different machine from the one which created the backup, I couldn't get the recovering machine to 'see' the backups. Is there a way to do this? My worry is that if the host performing the backups fails, I need to be able to install Hyper-V on another host and recover the backed-up VMs there until I can rebuild the original host. It appears that this isn't possible; I am hoping I am missing something. Any help would be greatly appreciated.
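    For reference, wbadmin can be pointed at a backup location explicitly rather than relying on the local catalog; whether this works across machines depends on the OS versions matching, so treat it as a sketch (the drive letter, host name, timestamp and paths are placeholders):

        REM List backup versions stored on the attached backup disk.
        wbadmin get versions -backupTarget:E: -machine:OLDHOST

        REM Restore the VHD folder from a specific version onto the new host.
        wbadmin start recovery -version:04/30/2010-02:00 -backupTarget:E: ^
            -machine:OLDHOST -itemType:File -items:D:\VHDs -recoveryTarget:D:\ -recursive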


  • Keepalived with apache unable to bind interface on Backup server

    - by davideagle
    I have two Debian 6 servers running keepalived 1.1.20, with one server acting as master and the other as backup. Both servers host Apache 2.4, which has a global listener on all interfaces on port 80 (Listen *:80); however, I have some sites that require a listener on port 443 (SSL), and that is configured per VirtualHost in the Apache config, since I do not want every VirtualHost to listen on port 443. The problem is that when I try to start Apache on the backup machine, which does not hold the virtual IP the VirtualHost is supposed to be listening on, I get "AH00072: make_sock: could not bind to address 1.1.1.1:443". I know this is expected behavior for Apache. The real question is: are there any known workarounds or solutions to this scenario?
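    The commonly cited workaround is to let processes bind to addresses the machine does not currently hold, so Apache on the backup node can open its VIP listeners before keepalived assigns the address:

        # Allow binding to non-local IP addresses (run on the backup node).
        echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
        sysctl -p

    The alternative is to have keepalived start or reload Apache from its notify_master/notify_backup scripts instead of keeping it listening on both nodes at all times.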


  • How to take a MySQL replication backup

    - by user53864
    I have a MySQL master-master replication setup with a slave for each master (only one master is used for reads/writes at a time) on Ubuntu server. I am wondering what would be the best way to schedule backups of the replicated databases with mysqldump. I have the following questions, without answers to which I could not proceed further:

    Is scheduling a mysqldump backup on the masters safe for replication?
    Is connecting to the masters with GUI applications (Workbench) for database manipulation (reads and writes by developers) safe?

    Any inputs are welcome.
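    For the dump itself, two mysqldump flags are worth knowing when replication is involved; a sketch of a crontab entry follows (credentials and paths are placeholders). --single-transaction dumps InnoDB tables from a consistent snapshot without long table locks, and --master-data=2 records the binlog coordinates as a comment so the dump can seed or repair a replica later:

        # Sketch crontab entry: nightly dump (placeholder credentials/paths).
        # Note the escaped % required inside a crontab line.
        30 1 * * * mysqldump -u backup -pPASSWORD --all-databases \
            --single-transaction --master-data=2 \
            | gzip > /var/backups/mysql/all_$(date +\%F).sql.gz

    Dumping from one of the slaves instead of a master is another common pattern, since a long-running dump then cannot stall writes on the active master.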


  • How to back up Metro "app" data manually

    - by ihateapps
    I'm a PC tech and I have been getting more and more Windows 8 issues. First off, I hate Metro, and secondly I hate "apps". I like to do fresh installs of Windows 8: I back up the user's data and then fresh-install. Assuming there is no recovery partition, how can I manually back up the user's app data, so that in the event of a reformat I simply redownload their apps (dumb, I wish there were a way to actually back them up too) and plop the data back in? Do most normal people even use "apps", or do they use Windows 8 like a desktop OS? I totally dropped Metro with Classic Shell.
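    For what it's worth, Windows 8 store-app data lives under each user profile, so it can at least be copied file-wise; whether an app accepts restored data after a reinstall varies per app, so treat this as a sketch (the target drive is a placeholder):

        REM Sketch: copy a user's store-app data to an external drive.
        robocopy "C:\Users\%USERNAME%\AppData\Local\Packages" ^
            "E:\Backup\%USERNAME%\Packages" /MIR /XJ

    /XJ skips junction points so the copy does not loop on itself.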

