Search Results

Search found 9620 results on 385 pages for 'backup profile'.

  • User Profile cannot be loaded - Windows 7

    - by Ryan
    After uninstalling an HP Vector Mouse driver and rebooting, when Windows tries to auto-log me in, I get an error message saying the following: "The User Profile Service failed the logon. User profile cannot be loaded." Since it is the only account on this PC, I cannot even go into another account. I rebooted the machine several times before going into Safe Mode with Networking. For some reason, I cannot create a new account while in Safe Mode (I think it is related to UAC; nothing UAC-related is clickable). Thus, I am stuck: I cannot get into my account, nor can I create a new one to copy files over to. Any ideas? Thanks in advance! Ryan
    EDIT: System Restore was, for some reason, turned off, so I cannot restore to a working point.
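
    A minimal sketch of a common command-line workaround, assuming an elevated command prompt is reachable (e.g. Safe Mode with Command Prompt, or a repair-disc prompt); the account name and password here are hypothetical:

        net user rescue TempPass123 /add
        net localgroup Administrators rescue /add

    Logging on with the new account gives a working profile to copy files into before repairing or recreating the broken one.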

  • ghettoVCB issue

    - by romgo75
    I have set up a ghettoVCB script to back up three VMs. I run it from a crontab, but I have an issue. In my backup folder I have three subfolders, one for each VM. In each folder I have the following files:

        -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz
        -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz
        -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz
        drwxr-xr-x 1 root root  980 Mar 19 23:39 vm1-2010-03-19

    The problem is the last folder: it seems that a backup didn't finish. The log for that run reads:

        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/
        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
        2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info
        2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        http://...
        2010-03-19 23:39:35 -- info: Initiate backup for vm1
        2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'...
        Clone: 0% done. ... Clone: 9% done.
        Failed to clone disk : The file already exists (39).
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'...
        2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ...
        Clone: 7% done. ... Clone: 16% done.
        2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ...

    I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the script is not able to handle rotation of the VM backups. Any ideas? Thanks!
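
    A hedged recovery sketch for the state described above, assuming console access to the ESX/ESXi host; the VM id must be looked up first, and the stale directory path is taken from the listing above:

        # find the numeric Vmid of vm1
        vim-cmd vmsvc/getallvms
        # delete the snapshots left behind on the VM (substitute the real Vmid)
        vim-cmd vmsvc/snapshot.removeall <Vmid>
        # clear the half-written destination so the next clone doesn't hit
        # "The file already exists (39)" again
        rm -rf /vmfs/volumes/datastore1/backup/vm1/vm1-2010-03-19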

  • Symantec BE: How is data flow of backups/restore to storage pools?

    - by Kumala
    I am evaluating Symantec's BackupExec 2012 and was wondering how the backup data flows from the server being backed up to the storage pool. For example: my BE server is in city A, the server that I am backing up is in city B, and the storage pool that I plan to use is also located in city B. When performing a backup, does the backup data flow from the server in city B to the BE server in city A and back to the storage pool in city B, or is it possible to have the backup data go directly from the server in city B to the storage pool in city B?

  • SmartSVN - Unable to create new repository profile

    - by Sandeepan Nath
    I have just installed SmartSVN on this Fedora system. The application starts (on running ./smartsvn.sh) with its usual UI, but many things are not working.
    Creating a new repository profile: trying to create one (Repositories - Repository Profiles - Add) gives:
        An Error occurred while processing an SVN command - Cannot connect to 'svn+ssh://192.168.0.103': There was a problem while connecting to 192.168.0.103:22
    Quick Checkout: trying to do Quick Checkout (less configuration) gives:
        An Error occurred while processing an SVN command - Malformed XML.
    Some observations: when I run the smartsvn.sh file like this: ./smartsvn.sh, it shows this in the console:
        Warning: /bin/java does not exist
        Could not lock /root/.smartsvn/_lock_
        Switched to running instance
    I was using SmartSVN on another system before this, where it was working. There too, it showed the warning Warning: /bin/java does not exist, but these lines did not appear:
        Could not lock /root/.smartsvn/_lock_
        Switched to running instance
    I have only the JRE installed on both systems, not the JDK. So, what could be the reason? Any pointers? Thanks, Sandeepan
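
    Two quick checks worth sketching, using the paths and host from the messages above (the ssh user name is hypothetical): the "Switched to running instance" line suggests a stale lock from a previous run, and the connect error can be reproduced outside SmartSVN:

        # remove the stale lock left by a crashed or duplicate instance
        rm /root/.smartsvn/_lock_
        # verify that ssh itself can reach the SVN host on port 22
        ssh user@192.168.0.103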

  • How to handle these variables in rsync exclude file?

    - by linux
    I have an exclude file for rsync, but I can't figure out how to ignore these file names, where the username varies:
        backup/cpbackup/daily/username/homedir/mail/cur/1244452567.H511146P7355.dwhs45.dwhs.net,S=2161:2,
        backup/cpbackup/daily/username/homedir/mail/cur/1244455430.H516330P14494.dwhs45.dwhs.net,S=4062:2,
    I tried this: backup/cpbackup/daily/*/homedir/mail/cur/* and this: *.*.dwhs45.dwhs.* But of course that would be too easy. Basically, I just want to avoid transferring the mail in the /cur/ directory of every user to the backups.
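
    A sketch of how rsync exclude-file matching usually works, assuming the transfer root contains the backup/ directory shown above: patterns are matched against paths relative to the transfer root, a single * never crosses a slash (so it stands in for exactly one path component, such as the username), and a pattern ending in / excludes the directory together with everything inside it:

        # excludes.txt
        /backup/cpbackup/daily/*/homedir/mail/cur/

        rsync -av --exclude-from=excludes.txt /source/root/ user@host:/dest/

    If the transfer root is deeper (e.g. the source is .../cpbackup/daily/ itself), the pattern must be shortened to match from there, e.g. /*/homedir/mail/cur/.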

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3 TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, but I am not experienced with it. My question is: does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I back up the RAID 6 production array to a JBOD?
    EDIT: The data server is running Windows Server 2008 x64.
    EDIT 2: To reduce rebuild time, what would you think about using two RAID 5s instead of one RAID 6?
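
    For the capacity arithmetic behind the 24 TB figure: RAID 6 reserves two disks' worth of parity, so usable space is (N - 2) x disk size; a one-line check:

        # usable RAID 6 capacity for 10 x 3 TB drives
        echo "$(( (10 - 2) * 3 )) TB"   # -> 24 TB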

  • Cannot see main user profile directory on old vista hdd on win7

    - by chaoskreator
    I have an old laptop HDD that ran Vista, from which I need to get some pictures and movies. I've attached it via SATA cable to my new Windows 7 (64-bit) machine and it mounts fine, except I can't see the main user profile in the D:\Users directory. I've changed ownership and permissions on the D: drive to my Windows 7 user, but still no luck. I read something about this being caused by UAC being active on the Vista machine. Is this true? Is there a way to disable this and gain access to the main profile without putting the drive back into the old laptop (it's fried and won't boot)?
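
    A sketch of the usual take-ownership sequence from an elevated command prompt on the Windows 7 machine (the profile folder name and user name are hypothetical):

        takeown /F "D:\Users\OldProfile" /R /D Y
        icacls "D:\Users\OldProfile" /grant YourUser:(OI)(CI)F /T

    takeown recursively claims ownership of the tree, and icacls grants full control that inherits to files (OI) and subfolders (CI).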

  • Windows 8 Restore Problems

    - by Joe
    I created a Windows 8 system image backup on a separate drive before I installed Linux, and during the Linux installation process I accidentally wiped out Windows. I now want to restore the Windows 8 backup that I have on the separate drive. I created a repair USB stick and followed the directions in this article. After selecting the image on the hard drive, I get this error: "To restore this computer, Windows needs to format the hard drive." I don't know what this means! The drive partitions are different now than they were when I backed up, so I don't know if that matters. I re-installed Windows and I can restore my files from this backup, but I don't think that covers the registry, etc. I want a full restore. Does anyone know how to fix this problem, or how to restore in a different way? Thanks!

  • Bash Script to Back Up Backs Up Itself

    - by Jay LaCroix
    I have the following bash script that creates a tar.gz of my filesystem on a Kubuntu PC. The problem is that it also tries to back up the tar.gz backup file itself, even though I am storing the backup in /tmp and omitting /tmp from the backup. I am wondering why it's backing up the file in /tmp even though I told it not to.

        #!/bin/bash
        # init
        DATE=$(date +20%y%m%d)
        sudo tar -cvpzf /tmp/`hostname`_$DATE.tar.gz \
          --exclude=/proc \
          --exclude=/lost+found \
          --exclude=/sys \
          --exclude=/mnt \
          --exclude=/media \
          --exclude=/dev \
          --exclude=/tmp \
          --exclude=/home/jlacroix/Desktop \
          --exclude=/home/jlacroix/Documents \
          --exclude=/home/jlacroix/Music \
          --exclude=/home/jlacroix/Pictures \
          --exclude=/home/jlacroix/Projects \
          --exclude=/home/jlacroix/Roms \
          --exclude=/home/jlacroix/Videos \
          --exclude=/home/jlacroix/.VirtualBox\ VMs \
          --exclude=/home/jlacroix/.SpiderOak \
          /
        scp /tmp/`hostname`_$DATE.tar.gz jlacroix@Pluto:/share/Recovery/Snapshots
        sudo rm /tmp/`hostname`_$DATE.tar.gz
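
    A hedged sketch of one way around this: whether --exclude=/tmp matches can depend on the tar version and how the leading slash is handled, so archiving relative paths from / makes the member names and the exclude patterns agree by construction (exclude list abbreviated; extend it as in the original):

        #!/bin/bash
        DATE=$(date +%Y%m%d)   # %Y is the 4-digit year; the original's 20%y breaks after 2099
        ARCHIVE=/tmp/$(hostname)_$DATE.tar.gz
        cd / || exit 1
        sudo tar -cvpzf "$ARCHIVE" \
          --exclude=./tmp \
          --exclude=./proc --exclude=./sys --exclude=./dev \
          --exclude=./mnt --exclude=./media --exclude=./lost+found \
          ./
        scp "$ARCHIVE" jlacroix@Pluto:/share/Recovery/Snapshots && sudo rm "$ARCHIVE"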

  • Why can't this user connect to domain share?

    - by Saariko
    As part of reorganizing credentials in the domain, I have created several users that will be used solely for services (backup, LDAP, etc.). The idea is that systems needing specific access will use a dedicated service user that gives them exactly what they need. However, I am having trouble getting the required settings right. For this example, I have a NAS (ReadyNAS 1100 by Netgear) that runs its own backup jobs. The job reads from a domain share, \\domain\qa, and copies all data to another location. When using domain\administrator everything works. When I input the domain\srv.backup user, I get an error connecting to the folder. srv.backup is part of the 'Domain Admins' group, which is a member of 'Administrators'. I thought there might be propagation issues, but even when srv.backup was a direct member of 'Administrators' the error still occurred. I have 2 DCs (W2K8 R2 replicas); I thought that could also cause a problem, but as far as I can tell it's not the issue. Sharing permissions are open to everyone. (Screenshots of the folder security settings, the NAS dashboard test window, and the 'Domain Admins' membership are omitted here.) I also tried with a simple 1-9 password. What else do I need to check? Thanks.
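
    A quick way to test the service account against the share from any domain workstation, independent of the NAS (a sketch; the * makes net use prompt for the password):

        net use \\domain\qa /user:domain\srv.backup *
        REM an error here reproduces the problem outside the NAS
        dir \\domain\qa
        net use \\domain\qa /delete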

  • Child Folder inheriting a permission that parent folder does not have (NTFS)

    - by just.another.programmer
    I'm reconfiguring roaming profiles on my network to use proper NTFS security settings according to this article. I have reset the following permissions on the roaming profile parent folder:

        CREATOR OWNER, Full Control, Subfolders and files only
        User group with profiles, List folder / Create folders, This folder only
        SYSTEM, Full Control, This folder, subfolders, and files

    Then I select one of the actual roaming profile folders and follow these steps to fix the NTFS settings:

        1. Click Security, Advanced
        2. Uncheck "Allow inheritable permissions..."
        3. Choose "Remove..."
        4. Recheck "Allow inheritable permissions..."
        5. Click "Apply"

    After I choose Apply, I get the following permissions listed on the roaming profile folder:

        Administrators (MYDOMAIN\Administrators), Full Control, This folder only
        CREATOR OWNER, Full Control, Subfolders and files only
        SYSTEM, Full Control, This folder, subfolders, and files

    Where is the Administrators entry coming from!? There is an entry on the root of the drive for Administrators to have full control, but the roaming profile parent folder is not set to inherit any permissions, and it does not have the Administrators permission.
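
    A sketch for pinpointing where the entry comes from (folder paths hypothetical): icacls tags inherited ACEs with (I), so an Administrators line without (I) was written directly to the folder rather than inherited from above:

        icacls "E:\Profiles\someuser"
        REM compare with the parent to see which entries actually propagate
        icacls "E:\Profiles"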

  • Permissions Required for Sharepoint Backups

    - by Wyatt Barnett
    We are in the process of rolling out an extranet for some of our partners, using WSS 3.0 as the platform. We already use it internally for a variety of things, and we are using the following PowerShell script to back up the server:

        param(
            $url = "http://localhost",
            $backupFolder = "c:\"
        )
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = New-Object Microsoft.SharePoint.SPSite($url)
        $names = $site.WebApplication.Sites.Names
        foreach ($name in $names)
        {
            $n2 = ""
            if ($name.Length -eq 0) { $n2 = "ROOT" } else { $n2 = $name }
            $tmp = $n2.Replace("/", "_") + ".sbk"
            $saveas = ""
            if ($backupFolder.Length -eq 0) { $saveas = $tmp }
            else { $saveas = Join-Path -Path $backupFolder -ChildPath $tmp }
            $site.WebApplication.Sites.Backup($name, $saveas, "true")
            Write-Host "$n2 backed up to $saveas."
        }

    This script works perfectly on the current installation when run as our domain backup user. On the new box, it fails when run as the backup user, claiming "The web application located at http://extranet/ could not be found." That URL does, in fact, work, so I'm fairly certain it isn't anything that dumb but rather some permissions issue, especially because the script works perfectly when executed from my security context. I have tried making the backup user a farm owner, as well as adding him to the various site collection admin groups on the extranet. The one major difference between the extranet and the intranet server is that the extranet has an alternate access mapping (for https://xnet.example.com) and also uses forms authentication for that mapping. Anyhow, what permissions (or other voodoo) do I need to set up to get this script to work properly?
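
    For comparison, a hedged sketch using WSS 3.0's built-in stsadm, which takes the same per-site-collection backups and tends to report permission problems more explicitly (file path hypothetical; run it as the backup user):

        stsadm -o backup -url http://extranet -filename E:\backups\extranet_root.sbk

    If stsadm succeeds where the script fails, the difference points at the backup account's object-model permissions (e.g. database access) rather than at the URL itself.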

  • Writing to Samba share as different user?

    - by Hamid Elaosta
    I have a Samba share on my NAS drive, mounted as follows:

        mount -t smbfs -o username=backup,password=backups_password //sharebox/SVNBackup /mnt/SVNBackup

    I am then trying to run:

        sudo svnadmin dump /usr/local/svn/repos/testrepo > /mnt/SVNBackup/test1.svn

    but I get:

        bash: /mnt/SVNBackup/test1.svn: Permission denied

    The backup location is set up to accept access only from the user "backup" (who doesn't exist on the local system). How do I go about solving this problem? Thanks
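
    One likely explanation, with a sketch of two common workarounds: the > redirection is performed by the calling shell before sudo runs, so the write to the share happens with the invoking user's credentials, not root's:

        # run the dump and the redirection together as root
        sudo sh -c 'svnadmin dump /usr/local/svn/repos/testrepo > /mnt/SVNBackup/test1.svn'

        # or mount so the local account owns the mounted files
        # (the uid/gid mapping shown is an assumption about the desired ownership)
        sudo mount -t cifs -o username=backup,password=backups_password,uid=$(id -u),gid=$(id -g) \
            //sharebox/SVNBackup /mnt/SVNBackup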

  • Current alternative to the old CHECKSUM program

    - by faulty
    I'm looking for an application that does md5/sha hash checks on specific files/folders periodically and stores an index file per folder for future verification. I remember such applications existed in the DOS days, to detect files infected by viruses. The main purpose of this is to detect corrupted copies of backups, as I understand that consumer-grade hardware is not 100% error-free when doing backups or file transfers from device to device. The hashes can also be used to generate a list of changed files for backup. Most of the software I can find only hashes manually.
    EDIT: Windows-based application, preferably a shell extension which I can right-click on a folder to checksum/verify all files in that folder. Even better if it can integrate with a backup/sync program like BeyondCopy.
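
    Lacking a shell extension, a minimal sketch of the same idea in PowerShell (assumes PowerShell 4.0+ for Get-FileHash; the index file name is arbitrary): write one index per folder, then recompute later and diff:

        # build the per-folder index
        Get-ChildItem -File |
            Get-FileHash -Algorithm SHA256 |
            Select-Object Hash, Path |
            Export-Csv .\checksums.csv -NoTypeInformation

        # later: recompute and compare against the stored index
        Compare-Object (Import-Csv .\checksums.csv) `
            (Get-ChildItem -File -Exclude checksums.csv |
                Get-FileHash -Algorithm SHA256 |
                Select-Object Hash, Path) `
            -Property Hash, Path

    Any output from Compare-Object marks a file that changed or was corrupted since the index was written.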

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500 GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded, never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure whether duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly, to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
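
    A hedged sketch of the relevant AWS CLI calls (bucket names hypothetical). With versioning enabled, a DELETE only adds a delete marker, so a never-modified object should still be listed and restorable as a prior version; and an S3-to-S3 sync copies server-side, without routing the data through the client:

        # turn on versioning
        aws s3api put-bucket-versioning --bucket my-bucket \
            --versioning-configuration Status=Enabled

        # after a delete: prior versions (and the delete marker) remain visible
        aws s3api list-object-versions --bucket my-bucket --prefix uploads/file.bin

        # server-side bucket-to-bucket copy
        aws s3 sync s3://my-bucket s3://my-backup-bucket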

  • ntbackup workalike for ad hoc full backups in Windows 7 that's free and preferably open source

    - by Justin Dearing
    On Windows 2000 and XP machines I used to be able to do the following:

        ntbackup backup systemstate c: /f e:\backups\machineName\machineName-full+systemstate_200101206.bkf

    This gave me a full backup of the system that I could use to do a system restore after doing a bare-bones OS install. Windows 7 has a great utility for regular backups with alerting and all that stuff, but it does not seem to have command-line support. I'd like a backup solution for my Windows 7 systems that has the following features:

        - Is free
        - Is open source (preferably)
        - Works while the system is booted and leaves the system functional (Clonezilla is great for offline backups, and I use that too)
        - Gives me a backup suited for a full or partial system restore (ruling out most imaging software, even if it could work while the system is booted via some sort of shadow-copy voodoo)
        - Can work via the command line
        - Compression would be nice; the ability to pipe output would be better
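
    Not open source, but worth noting as the closest built-in equivalent: Windows 7 ships wbadmin, which is scriptable from an elevated prompt. A hedged sketch (target and volume letters hypothetical; the client edition supports a subset of the server options):

        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    -allCritical pulls in everything needed for a bare-metal recovery of the OS volume, roughly the role ntbackup's systemstate played.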

  • Backups of Exchange 2007 SP3 using VSS are abnormally large

    - by Stew
    I have recently implemented Veeam Backup & Replication 6.0 and have noted that when backing up my Exchange server via incremental updates, it transfers far more data than expected. The backup is incremental and set up to use VSS. VSS is stable and healthy, according to vssadmin. Exchange 2007 SP3 is running on Windows Server 2008 R2; just last weekend I installed the latest rollup for Exchange. I thought the nightly incrementals were large, but perhaps my users really are sending that much mail, so I tested by taking one incremental backup, waiting 10 minutes, and taking a second. The second incremental backup transferred 5.8 GB of data. We as an organization are absolutely NOT putting 5.8 GB of data on the mail server every 10 minutes. Have any other Veeam users seen something similar? Is my test faulted? Are there other considerations for VSS?

  • Build a user's profile directory on creation in batch

    - by Moses
    I have a batch script that I use when I set up new Windows 7 PCs; it creates a user based on a variable, creates a folder on their desktop, then shares it:

        @echo off
        SET /p unitnumber="Enter unit number: "
        net user unit%unitnumber% password /add /expire:never
        MD "C:\Users\unit%unitnumber%\Desktop\Accounting #%unitnumber%"
        runas /user:administrator "net share "Accounting#%unitnumber%"="C:\Users\unit%unitnumber%\Desktop\Accounting#%unitnumber%""

    I discovered that the share that is created is overwritten when the newly created user first logs on, because Windows builds their profile directory at that time. Is there any way to initiate a build of a user's profile directory in the batch file just after creating it? The only thing that looks useful is the /homedir:pathname switch for the net user command, but I believe that option assumes the directory already exists. Other than that, web research hasn't been fruitful. I'd be happy to use whatever to get this done, as long as I can incorporate/launch it from the batch. Any suggestions?
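
    One way to sidestep the first-logon rebuild entirely, sketched under the assumption that the share does not have to live inside the profile (the C:\Shares root is hypothetical): create the folder under a root Windows never regenerates, and grant the new user access on the share itself:

        MD "C:\Shares\Accounting #%unitnumber%"
        net share "Accounting#%unitnumber%=C:\Shares\Accounting #%unitnumber%" /grant:unit%unitnumber%,FULL

    The profile build then never touches the shared folder, and the user can reach it through a shortcut created at logon.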

  • backing up ntfs disk using rsync on ubuntu

    - by user70366
    For a long time I was using Windows. I have a separate drive I use to keep copies of my media files, photos, etc., which I periodically back up to an external drive. In Windows I used SyncToy to do this. After Windows stopped booting, I decided to switch to Linux (Ubuntu 10.10). That seems to be going fine, but now I want to back up my drive to the external drive like before. Mostly the two drives will already be the same, with maybe about 10 GB of extra files added. So I try to use rsync to synchronise the two drives like this:

        rsync --dry-run -rvlt --modify-window=1 /media/Antonio1TB/Backup /media/FREECOM\ HDD/Backup

    The problem is the dry run indicates that every file on the drive will be copied, not just the files I have recently added. What is the correct command to sync two NTFS drives under Ubuntu so that files that already exist don't get copied again? Thanks.
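
    A likely culprit, sketched: without a trailing slash, rsync copies the source directory itself into the destination, so the command above proposes creating Backup/Backup rather than comparing the two existing trees. With the slash, contents are compared against contents:

        # trailing slash on the source = "copy the contents of Backup"
        rsync --dry-run -rvlt --modify-window=1 \
            /media/Antonio1TB/Backup/ "/media/FREECOM HDD/Backup/"

        # if timestamp drift between the NTFS mounts still forces copies,
        # fall back to comparing by size only
        rsync --dry-run -rv --size-only \
            /media/Antonio1TB/Backup/ "/media/FREECOM HDD/Backup/"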

  • Time difference between servers after disaster recovery

    - by Sandokan
    We are running an old training system based on Windows Server 2003 and XP clients. The solution is rather simple, with four servers, two of them being DCs. Everything is preconfigured, and that goes for the backup scheme as well. The backup software is Symantec Backup Exec 2010. The scheme is a standard grandfather-father-son (GFS) routine with full backups running once a week on Sundays; the other six days run differential backups. Now let's say, in a worst-case scenario, a server crashes on Saturday and we have to restore it from backup. The last full backup will then be six days old, and the server will thus come back online with six-day-old configuration. Will this pose a problem for the other servers, or will the recovered server "get in line" eventually?
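
    A hedged note with a post-restore health check: a six-day-old system-state restore of a 2003 DC is normally well inside the Active Directory tombstone lifetime (60 or 180 days depending on the forest's history), so replication should bring the recovered DC back in line; repadmin from the 2003 Support Tools can confirm it afterwards:

        repadmin /replsummary
        repadmin /showrepl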

  • Backing up a Linux VPS with RSync to Vista

    - by Frank
    I've been working to set up a Linux VPS to host a couple of WordPress sites and eventually a Mercurial server. I've set up one site and things have gone well. However, before I start moving other things to the VPS, I need to set up a backup solution. My provider, Linode, suggests rsync (among a couple of other options) for backups. I've seen a few posts on this site that suggest other backup solutions, including going to the Amazon cloud, but that costs money, and the VPS is all the money I want to spend on this for the time being. So, to help solve that, I want my backup computer to be my home desktop. Assuming I'm using rsync, is it possible to use my Vista-based home computer as the destination for the backup? And if it is possible, what type of command or connection would I need to configure on the Vista machine? Any insight would be helpful. It's probably obvious, but I've never used rsync.
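
    A sketch of the usual arrangement, assuming a Cygwin-based rsync port such as cwRsync or DeltaCopy is installed on the Vista machine (hostname and paths hypothetical): rather than exposing the desktop to the internet, pull from the VPS over SSH on the desktop's own schedule:

        # run on the Vista desktop from the cwRsync/Cygwin shell
        rsync -avz -e ssh --delete \
            user@vps.example.com:/var/www/ /cygdrive/c/Backups/vps/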
