Search Results

Search found 9620 results on 385 pages for 'backup profile'.

Page 12 of 385

  • How do I create a read-only MySQL user for backup purposes with mysqldump?

    - by stickmangumby
    I'm using the automysqlbackup script to dump my MySQL databases, but I want a read-only user to do this with so that I'm not storing my root database password in a plaintext file. I've created a user like so:

        grant select, lock tables on *.* to 'username'@'localhost' identified by 'password';

    When I run mysqldump (either through automysqlbackup or directly) I get the following warning:

        mysqldump: Got error: 1044: Access denied for user 'username'@'localhost' to database 'information_schema' when using LOCK TABLES

    Am I doing it wrong? Do I need additional grants for my read-only user? Or can only root lock the information_schema tables? What's going on?

    Edit: GAH, and now it works. I may not have run FLUSH PRIVILEGES previously. As an aside, how often does this occur automatically?

    Edit: No, it doesn't work. Running mysqldump -u username -p --all-databases > dump.sql manually doesn't generate an error, but doesn't dump information_schema. automysqlbackup does raise an error.
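
    One commonly suggested workaround is to give the dump user the handful of extra privileges mysqldump wants and then avoid table locks for transactional tables. A minimal sketch, assuming InnoDB tables; the 'backup'/'secret' credentials are placeholders:

        # create the dump user (SHOW VIEW and TRIGGER are needed to dump views/triggers)
        mysql -u root -p -e "CREATE USER 'backup'@'localhost' IDENTIFIED BY 'secret'"
        mysql -u root -p -e "GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER ON *.* TO 'backup'@'localhost'"

        # --single-transaction takes a consistent InnoDB snapshot without LOCK TABLES
        mysqldump -u backup -psecret --single-transaction --all-databases > dump.sql

    For MyISAM tables --single-transaction does not help; there, --skip-lock-tables (at the cost of dump consistency) is the usual fallback.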

  • Backup / Disaster Recovery, should I store RAR-compressed files?

    - by moraleida
    I'm in the process of recovering files from an accidentally formatted Ext4 partition using Photorec. It held about 300 GB of data, of which I've already recovered about 30 GB. So far, the recovery of RAR-compressed files has been much more successful than the recovery of individual uncompressed files and ZIP-compressed files, in the sense that many of the recovered plain files and ZIPs were unreadable, while pretty much all of the RAR files were intact. Is there such a correlation? Are RAR-compressed files really less prone to corruption, and thus easier to recover?
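
    For what it's worth, the RAR format can also carry an explicit recovery record, which lets a damaged archive repair itself; whether that played any role here is speculation, but it is cheap insurance for backup archives. A sketch, assuming the rar binary is installed (file names are placeholders):

        rar a -rr5p backup.rar important-files/   # create archive with a 5% recovery record
        rar r backup.rar                          # later: attempt repair using that record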

  • Time Machine for Windows

    - by Kevin L.
    A simple Google search for "Time Machine for Windows" results in a flurry of different little apps. But instead of relying on forum anecdotes and advertisements, I call on the much wiser Super User beta community for some depth on this one. Having Time Machine running on Leopard is like a warm, fuzzy blanket of comfort that I never got with RAID, rsync, or SyncToy on Windows. I'm not asking the community what the "best" backup software for Windows is, but instead: is there any true Time Machine clone for Windows, one that includes as many of the following as possible:

    - Completely transparent, "set-it-and-forget-it" backup
    - Incremental backups (changes only) for every hour for a day, every day for a month, and every week until the backup disk is full
    - Ability to rebuild from this backup disk in case of a main-drive meltdown (the backup doesn't have to be bootable; neither are Time Machine disks)
    - Extremely easy-to-use UI (target user == wife); bonus points for a beautiful UI

  • Enable roaming profile from group policy

    - by Rob Nicholson
    I've had a reasonable look around the AD policies, but am I right in saying that the only place you can enable and define the roaming profile location is by editing the user object, i.e. there isn't a group policy setting to (say) "set the profile location to \\myserver\users\%username%\profile" for all users in group XYZ? I suspect this might be a chicken-and-egg problem, i.e. group policy is applied after the profile has been loaded and therefore can't specify its location. Cheers, Rob.

  • Local Profile Map to New Active Directory Login

    - by user42937
    Preface: I am sure this has been asked somewhere on the site before, but I couldn't find any questions about it; maybe I am not using the correct verbiage...

    Our admins are giving us new Active Directory accounts on a different domain. As I am a programmer (a member of IT), our group gets assigned the new accounts first to test the migration. When I log in to my local machine using the new account I get a new local profile. Not the biggest deal, but in the new profile I am missing drive mappings, desktop items, wallpaper, etc. Our users are going to throw a fit if there is no way around this. Two questions:

    1) I've seen references to NTUSER.DAT and suggestions to copy all user files from "Documents and Settings", but is there a good way (is it even possible?) to associate my existing local profile with the new AD account?
    2) Is there anything our admins can do to prevent this from happening?

  • Why does this rsnapshot exclude not work?

    - by bstpierre
    Rsnapshot passes excludes directly to rsync, but rsync's behavior appears inconsistent. I've simplified my rsnapshot backup test to the following directory tree (this tree will be backed up):

        gorilla:~# find /tmp/snaptest -exec file {} \;
        /tmp/snaptest: directory
        /tmp/snaptest/SKIPTHIS: directory
        /tmp/snaptest/SKIPTHIS/xyz: directory
        /tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text
        /tmp/snaptest/SKIPTHIS/bar: ASCII text
        /tmp/snaptest/SKIPTHIS/foo: ASCII text
        /tmp/snaptest/SKIPTHIS.txt: ASCII text

    My config file:

        config_version  1.2
        snapshot_root   /tmp/backup-media
        no_create_root  1
        cmd_cp          /bin/cp
        cmd_rm          /bin/rm
        cmd_rsync       /usr/bin/rsync
        cmd_ssh         /usr/bin/ssh
        cmd_logger      /usr/bin/logger
        cmd_du          /usr/bin/du
        interval        hourly  6
        interval        daily   7
        interval        weekly  4
        interval        monthly 3
        verbose         3
        loglevel        3
        logfile         /media/maxtor-one-touch/rsnapshot.log
        lockfile        /media/maxtor-one-touch/backups/.rsnapshot.pid
        rsync_short_args        -a
        rsync_long_args --delete --numeric-ids --relative --delete-excluded
        exclude "SKIPTHIS/**"
        link_dest       1
        backup  /tmp/snaptest   snaptest

    The result:

        gorilla:~# rsnapshot -c /tmp/snaptest.conf hourly
        echo 12638 > /media/maxtor-one-touch/backups/.rsnapshot.pid
        mkdir -m 0755 -p /tmp/backup-media/hourly.0/
        /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
            --exclude="SKIPTHIS/**" /tmp/snaptest \
            /tmp/backup-media/hourly.0/snaptest
        touch /tmp/backup-media/hourly.0/
        rm -f /media/maxtor-one-touch/backups/.rsnapshot.pid

        gorilla:~# find /tmp/backup-media/ -exec file {} \;
        /tmp/backup-media/: directory
        /tmp/backup-media/hourly.0: directory
        /tmp/backup-media/hourly.0/snaptest: directory
        /tmp/backup-media/hourly.0/snaptest/tmp: sticky directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/bar: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/foo: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS.txt: ASCII text

    My confusion stems from the fact that if I copy-paste the rsync command echoed by rsnapshot, the SKIPTHIS directory is excluded! (I've tested with various other SKIPTHIS patterns, with the same results.) Any idea what's going on?
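
    A frequent culprit with rsnapshot is quoting: rsnapshot passes config values to rsync verbatim (no shell in between), so the double quotes in exclude "SKIPTHIS/**" become part of the pattern, whereas pasting the echoed command into a shell strips them, which would explain exactly this difference. That is a hypothesis, not a confirmed diagnosis; one way to check is to drop the quotes in the config, or to compare against a direct dry run:

        # preview what rsync itself transfers under the same options
        rsync -a --delete --numeric-ids --relative --delete-excluded \
            --dry-run --itemize-changes --exclude='SKIPTHIS/**' \
            /tmp/snaptest /tmp/exclude-test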

  • SBS 2008 SP2 Backup - Volume Shadow Copy Operation Failed

    - by Robert Ortisi
    Server setup:

    - Exchange 2007 Version: 08.03.0192.001 (Rollup 4)
    - Windows Small Business Server 2008 SP2 (Rollup 5)
    - Exchange set up on the D: drive (449 GB of 698 GB free)
    - 80 GB of 148 GB free on the OS drive

    Issue: backup failure (VSS-related). Backup software: Windows Server Backup (ver 1.0).

    Simplified errors:

        Creation of the shared protection point timed out. Unknown error (0x81000101)

        The flush and hold writes operation on volume C: timed out while waiting for a release writes command.

        Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush and hold the volume \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}. This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase, and it can cause the shadow-copy creation to fail. Trying again when disk activity is lower may solve this problem.

    What I've tried:

    - Server reboot.
    - Updated the server and Exchange.
    - Reconfigured SharePoint (helped resolve the last VSS error I encountered).
    - Re-registered the VSS DLLs (backups will sometimes work afterwards, but the VSS writers fail again soon after).
    - Tried implementing hotfix http://support.microsoft.com/kb/956136
    - Tried implementing hotfix http://support.microsoft.com/kb/972135
    - Left it for a few days; a few backups came through, but then they began to fail again.

    Detailed information (the Event XML blocks are omitted here; they duplicate the fields shown):

        Log Name:      Application
        Source:        VSS
        Date:          16/11/2011 8:02:11 PM
        Event ID:      12341
        Task Category: None
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      SERVER.DOMAIN.local
        Description:   Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush
                       and hold the volume \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}.
                       This might cause problems when other volumes in the shadow-copy set
                       timeout waiting for the release-writes phase, and it can cause the
                       shadow-copy creation to fail. Trying again when disk activity is
                       lower may solve this problem.
        Operation:     Executing Asynchronous Operation
        Context:       Current State: flush-and-hold writes
                       Volume Name: \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\

        Log Name:      System
        Source:        volsnap
        Date:          16/11/2011 8:02:11 PM
        Event ID:      8
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      SERVER.DOMAIN.local
        Description:   The flush and hold writes operation on volume C: timed out while
                       waiting for a release writes command.

        Log Name:      Application
        Source:        Microsoft-Windows-Backup
        Date:          16/11/2011 8:11:18 PM
        Event ID:      521
        Task Category: None
        Level:         Error
        User:          SYSTEM
        Computer:      SERVER.DOMAIN.local
        Description:   Backup started at '16/11/2011 9:00:35 AM' failed as Volume Shadow
                       copy operation failed for backup volumes with following error code
                       '2155348001'. Please rerun backup once issue is resolved.

    VSS writer status:

        Writer name: 'FRS Writer'
           Writer Id: {d76f5a28-3092-4589-ba48-2958fb88ce29}
           Writer Instance Id: {ba047fc6-9ce8-44ba-b59f-f2f8c07708aa}
           State: [5] Waiting for completion
           Last error: No error

        Writer name: 'ASR Writer'
           Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
           Writer Instance Id: {0aace3e2-c840-4572-bf49-7fcc3fbcf56d}
           State: [1] Stable
           Last error: No error

        Writer name: 'Shadow Copy Optimization Writer'
           Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
           Writer Instance Id: {054593e2-2086-4480-92e5-30386509ed1b}
           State: [1] Stable
           Last error: No error

        Writer name: 'Registry Writer'
           Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
           Writer Instance Id: {840e6f5f-f35a-4b65-bb20-060cf2ee892a}
           State: [1] Stable
           Last error: No error

        Writer name: 'COM+ REGDB Writer'
           Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
           Writer Instance Id: {9486bedc-f6e8-424b-b563-8b849d51b1e1}
           State: [1] Stable
           Last error: No error

        Writer name: 'BITS Writer'
           Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
           Writer Instance Id: {29368bb3-e04b-4404-8fc9-e62dae18da91}
           State: [1] Stable
           Last error: No error

        Writer name: 'Dhcp Jet Writer'
           Writer Id: {be9ac81e-3619-421f-920f-4c6fea9e93ad}
           Writer Instance Id: {cfb58c78-9609-4133-8fc8-f66b0d25e12d}
           State: [5] Waiting for completion
           Last error: No error
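
    The writer listing above looks like output from the stock VSS diagnostic command; when the failures recur, running it immediately after a failed job (rather than after a reboot) shows which writer is left in a failed or timed-out state:

        vssadmin list writers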

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions:

    - Amazon EC2 Backup Strategy
    - Amazon EC2 + EBS:: Regular backup plan?
    - Simple Backup Strategy for Amazon EC2 instances / volumes?

    and this suggestion: http://alestic.com/2009/09/ec2-consistent-snapshot

    I tried using the command line + crontab (the command line works, but crontab, for some reason, doesn't). But I'm still pretty lost. All I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks back, but delete old snapshots as new ones come in, for cost control). And as things usually go, if something is hard and painful, someone creates a solution for it. My question is simple: is there a way to do this with a tool like Puppet, without a painful learning curve (or via other tools like http://ylastic.com)? If yes, how?
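
    A rolling snapshot like this does not strictly need Puppet; a small script on cron covers it. A sketch using the modern aws CLI for illustration (in 2009 the equivalent was ec2-create-snapshot from the EC2 API tools); the volume ID and retention window are placeholders:

        #!/bin/bash
        VOL=vol-0123456789abcdef0
        # take tonight's snapshot
        aws ec2 create-snapshot --volume-id "$VOL" --description "nightly $(date +%F)"
        # delete snapshots of this volume older than 28 days
        CUTOFF=$(date -d '28 days ago' +%F)
        for snap in $(aws ec2 describe-snapshots --owner-ids self \
                --filters Name=volume-id,Values="$VOL" \
                --query "Snapshots[?StartTime<'$CUTOFF'].SnapshotId" --output text); do
            aws ec2 delete-snapshot --snapshot-id "$snap"
        done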

  • Free solution to backup folders to external SFTP server with shadow copy

    - by Sergiy Byelozyorov
    I have an account at university on a Linux machine, with 10 TB of free space accessible via SFTP. I would like to back up my Windows 7 x64 laptop to the university server. Currently I am using rsync+cygwin, but the backup is pretty slow (without shadow copy) and I hate the console window appearing every day on my screen when I log in. So I am looking for something like Windows Backup, but with support for SFTP. A combination of tools would work too.

  • Backup IMAP mail, then access via IMAP again

    - by pauldoo
    I am looking for a tool to back up an entire IMAP account and then expose that backup (read-only) via IMAP again. This would be perfect for backing up email from any provider, and it would allow the backup to be accessed from any mail client even years after closing the account. I suspect this could be achieved with a full-blown IMAP server configured to mirror some other server, but I am hoping for a simpler solution.
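
    One simpler route, sketched here as an assumption rather than a recommendation: mirror the account to a local Maildir with offlineimap, then point any IMAP server (for example Dovecot, with mail_location = maildir:~/mail-archive) at that Maildir to re-expose it. A minimal ~/.offlineimaprc; hostnames and account names are placeholders:

        [general]
        accounts = archive

        [Account archive]
        localrepository = archive-local
        remoterepository = archive-remote

        [Repository archive-local]
        type = Maildir
        localfolders = ~/mail-archive

        [Repository archive-remote]
        type = IMAP
        remotehost = imap.example.com
        remoteuser = me@example.com
        ssl = yes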

  • Backup software for Mac OS X

    - by Simone Carletti
    Which backup software do you recommend for Mac OS X? As you probably know, Leopard comes with an integrated backup tool called Time Machine. It works pretty well, although it lacks some advanced restore/search features. Here's a list of backup software for Mac OS X:

    - Time Machine (integrated)
    - Carbon Copy Cloner (free)
    - SuperDuper (commercial)
    - iBackup (free)

    Do you know more? What software do you use, and which feature can't you live without?

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync + cp -al method to create incremental/snapshot backups of our server tree. The backups go onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine, which we use to back up about 20 TB of data from the servers.

    The problem is that each daily backup takes more than 24 hours, and the real time-killer isn't the actual rsync but the time it takes to perform a cp -al of the tree locally on the backup machine. It takes more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each.

    So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
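
    One widely used way to drop the cp -al pass entirely is rsync's --link-dest: a single rsync run creates today's tree, hardlinking unchanged files against yesterday's snapshot, so the metadata-heavy local copy never happens. A minimal sketch with hypothetical paths, assuming a previous snapshot already exists:

        #!/bin/bash
        SRC=server:/data/
        TODAY=/backup/daily.$(date +%F)
        PREV=$(ls -d /backup/daily.* | tail -n 1)   # most recent existing snapshot
        rsync -a --delete --link-dest="$PREV" "$SRC" "$TODAY"

    It still walks the whole tree, but it does one walk instead of two and only writes changed files.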

  • Backup to Synology NAS using rsync or NFS and hardlinks

    - by danilo
    I want to back up data from a Windows (Vista) computer to a Synology NAS (210j). The NAS supports FTP, SMB, and NFS, and also allows an rsync daemon to be set up. I want to back up different folders to the NAS, but I'd prefer to use the hardlink method to save disk space (like this script does). With this method, a new folder is created for every backup, but if a file already exists on the target, only a hardlink is created. The filesystem on the Synology device is ext3, so I probably can't use rsyncbackup, as it is made for NTFS. Is there another way to do this backup with hardlink support?
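
    If the rsync daemon route is acceptable, rsync's --link-dest implements exactly this hardlink-per-snapshot scheme and works against an ext3 target. A sketch with placeholder paths and dates (module definition on the NAS, then the client command run from cygwin):

        # /etc/rsyncd.conf on the NAS -- a minimal writable module
        [backup]
            path = /volume1/backup
            read only = false

        # client side: hardlink unchanged files against the previous run
        rsync -a --link-dest=../2010-05-01 /cygdrive/c/data/ nas::backup/2010-05-02/

    A relative --link-dest is resolved against the destination directory, so ../2010-05-01 points at the previous snapshot inside the same module.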

  • How to back up a networked drive?

    - by nute
    I have a networked drive (Iomega Media Drive). To be safe in case the drive crashes, I've decided to buy an additional networked drive (WD MyBook World). Now, how do I back up one onto the other continuously? The WD drive came with backup software (a trial version; they didn't say that when I bought it), but it doesn't allow me to select a networked drive as the source, only local drives. How do I back up one networked drive onto another networked drive? Thanks

  • Windows Backup (2008 R2) recovery and timezone

    - by GrZeCh
    Hello, does a difference between the time zone of the Windows Server 2008 machine where the backup was made and the time zone of the recovery console make any difference? The recovery console (and wbadmin from the command line too) is not finding any backup on the local hard drive connected to the server. Thanks

    EDIT: I'm working on Windows Server 2008 R2.

    EDIT 2: This is not related to the time zone. When I connected a backup hard drive written by Windows Server 2008 R2 Release Candidate, the recovery console run from the RTM version of the system DVD found the backups stored on it without problems.
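
    One way to narrow this down from the recovery environment's command prompt is to ask wbadmin what it can see on the attached disk (the drive letter here is an assumption):

        wbadmin get versions -backupTarget:E:

    If that lists nothing, the problem is catalog/target visibility rather than time zones.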

  • Virtualizor + VPS Backup (Bare Metal Restore capable) Using rSync 3

    - by Gaia
    I am using Virtualizor to manage 3 Xen VPSes. The hardware node and each VPS run CentOS 5.x. My backup needs are as follows:

    1) I need to be able to bare-metal restore the entire hardware node, excluding the VPSes (which would be restored via #2 below).
    2) I need a complete backup of each VPS, ideally a backup that can be deployed on any other host that uses Xen, if the need arises. Naturally, I would also need to use this backup to restore an entire VPS to an earlier state within the same host.

    Which folders does rsync need to back up in order to accomplish the above? The rsync specialists aren't sure of it either. Thanks
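
    For #1, the usual file-level approximation (not a true disk image) is a full-system rsync of the node that excludes pseudo-filesystems and the guests' disk storage. A sketch; the guest-image path is a placeholder, since where the domU disks live depends on how Virtualizor provisioned the Xen storage:

        rsync -aAXH --numeric-ids \
            --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*} \
            --exclude=/xen-disks/* \
            / backuphost:/backups/node/

    The guests themselves (#2) are better captured from their own disk volumes (an LVM snapshot plus dd, or copies of the image files), which also keeps them portable to other Xen hosts.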

  • Configuring Backup Exec 2012 using USB hard drives as media

    - by SydxPages
    I have found some information on this, but not exact answers to these questions.

    Background:

    - I have installed Backup Exec 2012 (with agents for databases).
    - I have configured a storage pool with 2 USB drives (1 TB each).
    - The backups are configured to back up to one of the 2 drives (depending on which one is connected).

    I have 2 questions:

    1) How do I get Backup Exec to tell me which disk to insert? I have used tapes before, and it told me which tape to use; I was hoping this was available for disks too. (While there are only 2 at the moment, there will be more.)
    2) How do I get Backup Exec to delete old backups when the disk is full?

  • SBS 2011 backup

    - by Chris
    I have a freshly installed SBS 2011 server that I need to configure for backup. I tried using the SBS backup configuration tool, but it didn't want to use anything but an external drive. Previously, with our W2K3 servers, I used NTBackup to back up the server to disk and then copied the backup files to a remote server on a regular basis. It doesn't appear this is possible with the built-in backup tools in SBS 2011. Am I missing something? What other options are there that won't cost an arm and a leg?
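
    For the remote copy, the built-in wbadmin can target a network share directly, with the caveat that a share keeps only the most recent backup version. A sketch with placeholder names:

        wbadmin start backup -backupTarget:\\backupserver\sbs-backup -include:C: -vssFull -quiet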

  • Debian server doesn't free memory after backup

    - by stan31337
    I have a production server running Debian 6.0.6 Squeeze:

        # uname -a
        Linux debsrv 2.6.32-5-xen-amd64 #1 SMP Sun Sep 23 13:49:30 UTC 2012 x86_64 GNU/Linux

    Every day cron executes a backup script as root:

        # crontab -e
        0 5 * * * /root/sites_backup.sh > /dev/null 2>&1

        # nano /root/sites_backup.sh
        #!/bin/bash
        str=`date +%Y-%m-%d-%H-%M-%S`
        tar pzcf /home/backups/sites/mysite-$str.tar.gz /var/sites/mysite/public_html/www
        mysqldump -u mysite -pmypass mysite | gzip -9 > /home/backups/sites/mysite-$str.sql.gz
        cd /home/backups/sites/
        sha512sum mysite-$str* > /home/backups/sites/mysite-$str.tar.gz.DIGESTS
        cd ~

    Everything works perfectly, but I notice that Munin's memory graph shows an increase in cache and buffers after the backup. Then I just download the backup files and delete them. After deletion, Munin's memory graph returns cache and buffers to the state they were in before the backup. Here's the Munin graph: unfortunately I don't have enough rep to add an image here, so here's a link:
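
    This is expected behavior rather than a leak: tar, gzip, and sha512sum push gigabytes through the page cache, and Linux keeps those pages cached until something else needs the memory; deleting the backup files invalidates their cached pages, which is why the graph drops. A quick way to confirm (as root; dropping caches is safe but normally unnecessary):

        free -m                                     # "cached" is reclaimable page cache
        sync && echo 3 > /proc/sys/vm/drop_caches   # discard clean caches
        free -m                                     # cache/buffers shrink back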

  • How can I get Transaction Logs to auto-truncate after Backup

    - by Yaakov Ellis
    Setup: SQL Server 2008 R2, databases set up with the Full recovery model.

    I have set up a maintenance plan that backs up the transaction logs for a number of databases on the server. It is set to create backup files in sub-directories for each database, verify backup integrity is turned on, and backup compression is used. The job is set to run once every 2 hours during business hours (8am-6pm). I have tested the job and it runs fine, creating the log backup files as it should. However, from what I have read, once the transaction log is backed up, it should be OK to truncate it. I do not see any option for doing this in the SQL Server Maintenance Plan designer. How can I set this up?
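
    There is no extra step to configure: under the Full recovery model, the log backup itself truncates the inactive portion of the log (marks that space reusable); what it never does is shrink the physical file. A way to verify reuse after a couple of backup cycles, with placeholder database and path names:

        sqlcmd -Q "BACKUP LOG [MyDb] TO DISK = N'D:\LogBackups\MyDb.trn'"
        sqlcmd -Q "DBCC SQLPERF(LOGSPACE)"   # "Log Space Used (%)" should fall after backups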

  • IMAP mailboxes Backup script...

    - by Paul Stevens
    Hi guys, I need some advice on creating a script that runs at night on my IMAP mail server, backs up all the mailboxes (50), and, once the backup is done, burns the resulting file to a DVD. What would be the best procedure to accomplish an IMAP backup, and what tools should I use? Thanks
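
    One possible shape for this, assuming the server stores mail in a spool readable by root and that growisofs (from dvd+rw-tools) is available; the paths and device are placeholders:

        #!/bin/bash
        STAMP=$(date +%F)
        tar czf /backups/imap-$STAMP.tar.gz /var/vmail                # archive the mail spool
        growisofs -Z /dev/dvd -R -J "/backups/imap-$STAMP.tar.gz"     # master the DVD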

  • Backup and restore Subversion user permissions

    - by Earth Engine
    We use svnsync to create fully functional backup servers, and we have a script to do so. However, if we want to create a new backup server, we have to copy the htpasswd and groups.conf files across (that is not hard) and, after running svnsync, manually assign the users/groups to repositories. Also, if we change an assignment on the main server, there is no easy way to apply that change to all backup servers. Since we have 50+ projects and 30+ users, this is a boring and error-prone exercise. Are there any tools that can help us back up and restore these assignments automatically? We are using VisualSVN under Windows, so solutions as Windows scripts rather than shell scripts are preferred.
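
    A sketch of the automation (shell syntax for brevity; the same two steps translate directly to a Windows batch file, and the VisualSVN-related paths here are placeholders):

        #!/bin/bash
        # mirror each repository, then copy the auth files so user/group
        # assignments follow the main server automatically
        for repo in project1 project2; do
            svnsync synchronize "file:///backup/repos/$repo"
        done
        rsync -a /etc/visualsvn/htpasswd /etc/visualsvn/groups.conf backup2:/etc/visualsvn/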
