Search Results

Search found 7545 results on 302 pages for 'backup and restore'.

  • RRAS DNS Entries from Windows Vista / 7 Clients

    - by Christopher
    How do I stop a Win 2003 RRAS server from sending its own DNS info to the VPN client?

    We have RRAS running on Win 2003 Server. The server has a fixed IP, but RRAS is set up to use DHCP for assigning VPN client IPs. Our DHCP is set up to send four DNS server entries in this order:

    1. Internal DNS Server
    2. Backup Internal DNS Server
    3. External DNS Server
    4. Backup External DNS Server

    Here's the thing: the RRAS server seems to automatically send its own DNS entries (from its NICs) to the client first, and then the entries from DHCP are applied. But since the RRAS server has the Internal and Backup Internal DNS servers as its own DNS entries, it sends these first, and when the DHCP DNS entries come down, only the ones not already added get added (just the externals). This results in the following DNS list on the VPN client:

    1. External DNS Server
    2. Backup External DNS Server
    3. Internal DNS Server
    4. Backup Internal DNS Server

    This is no good, of course, because internal names will no longer resolve. How do I stop the RRAS server from sending its own DNS info to the VPN client? Note this doesn't seem to happen on Windows XP - it gets the DNS servers directly from DHCP, in the correct order.

  • How do I hide "Extra File" and "100%" lines from robocopy output?

    - by Danny Tuppeny
    I have a robocopy script to back up our Kiln repositories that runs nightly, which looks something like this:

        robocopy "$liveRepoLocation" "$cloneRepoLocation" /MIR /MT /W:3 /R:100 /LOG:"$backupLogLocation\BackupKiln.txt" /NFL /NDL /NP

    In the output, there are a ton of lines that contain "EXTRA File", like this:

        *EXTRA File 153 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fdt
        *EXTRA File 12 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fdx
        *EXTRA File 128 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fnm
        *EXTRA File 363 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.frq
        *EXTRA File 13 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.nrm

    Additionally, there are hundreds of lines at the bottom that contain nothing but "100%", like this:

        100%
        100%
        100%
        100%
        100%
        100%
        100%

    In addition to making the log files enormous (there are a lot of folders/files in Kiln repos), this makes it annoying to scan through the logs now and then to see if everything is working OK.

    1. How do I stop "EXTRA File" lines appearing in the log?
    2. How do I stop these silly "100%" lines appearing in the log?

    I've tried every combination of switches I can think of (the current switches are listed in the command above), but none of them seem to hide these!
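
    If no switch combination suppresses these lines, one fallback is to filter the log after the run. A minimal sketch, assuming the script is PowerShell (the $variable syntax above suggests it); the cleaned-log file name is made up for the example:

        Get-Content "$backupLogLocation\BackupKiln.txt" |
            Where-Object { $_ -notmatch 'EXTRA File' -and $_.Trim() -ne '100%' } |
            Set-Content "$backupLogLocation\BackupKiln.clean.txt"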

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setup of the solution. I would prefer if maintenance of the system is somewhat low once it is set up, but I am willing to accept some tradeoffs.

    Existing equipment:

    - Router: Netgear WNDR3700 (gigabit)
    - Router: DLink Gamerlounge DGL-4300 (gigabit)
    - Switch: 16-port Trendnet green switch (gigabit)
    - Switch: 5-port Trendnet green (gigabit)
    - Computer: i7-950 office computer (gigabit Ethernet)
    - Computer: Q6600 quad-core media center, hooked up to TV, records shows (gigabit Ethernet)
    - Computer: Acer 1810T ultraportable laptop (gigabit Ethernet and wireless N)
    - NAS: Intel SS4200-E (gigabit)
    - External hard drive: 2TB WD Green drive (eSATA)
    - All kinds of miscellaneous network-connected kit: TV, Blu-ray, Verizon network extender, HDHomeRun TV tuners, etc.

    Requirements:

    - Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
    - Central location for all users' files, while also keeping them secure from each other
    - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - Possibility to host files to friends and family easily

    Nice to have:

    - Backup of terabytes of movie backups

    Intriguing possibilities:

    - Capability to have users' Windows desktops and files look the same from all network computers

    I am not sure if the new Windows Home Server 2011 would fit into this well, if I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility that I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up offsite by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of irreplaceable files.

  • Preparing to migrate MOSS 2007 SP1 content into a new server with MOSS 2007 SP2 with the same server name

    - by Albert Widjaja
    To all experts here,

    I'd like to perform a MOSS server content migration from the existing all-in-one, single-instance server (SQL Server 2008, Windows Server 2008, SharePoint 2007 SP1), keeping only the server name. Here's the outline from my migration plan document. I have already created the new Windows Server 2008 R2 + SQL Server 2008 SP1 + MOSS 2007 SP2 server; after that:

    1. Back up MOSS_ContentDB from SQL Server 2008 SSMS (is this enough to cover all top-level sites and their content in the libraries and document repositories?)
    2. Back up all top-level sites from the Central Administration site using the CA site backup and restore tools.
    3. Turn off the old MOSSDEV01 (the current server).
    4. Rename the new server (TempServ01) to MOSSDEV01 (so that user bookmarks and other links inside the MOSS sites still work).
    5. Restore the database.
    6. Restore the site collections and their content from the backup.

    Is that the correct way to do it? I haven't executed the server configuration wizard to create the same port numbers for the SSP, CA and the SQL Server DB yet. Any ideas and suggestions would be greatly appreciated. Thanks.
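
    For reference, a sketch of the stsadm equivalents of step 1's farm-level backup and step 2's per-site backups (the share paths and site URL here are placeholders, not from the original plan):

        REM full farm (catastrophic) backup
        stsadm -o backup -directory \\backupserver\mossbackup -backupmethod full

        REM individual site collection backup / restore
        stsadm -o backup -url http://mossdev01/sites/teamsite -filename \\backupserver\teamsite.bak
        stsadm -o restore -url http://mossdev01/sites/teamsite -filename \\backupserver\teamsite.bak -overwrite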

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box is running Red Hat Enterprise 5; our backup box is running Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3GB. Every now and again (~5% of the time), the backup script fails. The shell script on the "client side" - which looks at return codes - never identifies any problem, since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

        #Software: Microsoft Internet Information Services 5.1
        #Version: 1.0
        #Date: 2009-08-08 07:04:25
        #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
        07:04:25 192.168.111.235 [15]USER backup 331 0
        07:04:25 192.168.111.235 [15]PASS - 230 0
        07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
        07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort of a catch-all for "IIS was not happy". The real puzzler is the Win32 code: 10035 (WSAEWOULDBLOCK - resource temporarily unavailable). My understanding is that this code is normal when using non-blocking socket calls, which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 seconds. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!
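
    Since the classic ftp client exits 0 even when a transfer aborts, the client side can never see these failures. A sketch of a wrapper that can (the host, credentials and file name are placeholders): curl returns a nonzero exit code on a failed FTP upload, so the script can detect the ~5% failures and retry.

        #!/bin/bash
        # Hypothetical upload wrapper - curl, unlike ftp, fails loudly.
        BACKUP_FILE=backup_$(date +%Y%m%d).zip
        for attempt in 1 2 3; do
            if curl --silent --show-error -T "$BACKUP_FILE" \
                    ftp://backup-box/ --user backup:secret; then
                echo "upload OK on attempt $attempt"
                exit 0
            fi
            echo "upload failed (attempt $attempt), retrying..." >&2
            sleep 60
        done
        exit 1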

  • PHP vs Batch file for mysql cronjob?

    - by mysqllearner
    Hi, my server details:

    - OS: Windows Server 2003
    - IIS 6
    - Plesk 8.x installed (currently using Plesk to set the cron job)

    I need your advice. I have two methods:

    - Method 1: using PHP + mysqldump, create database backup files as gzip, then send an email with the attachment (each database is around 25MB).
    - Method 2: using a batch file + mysqldump, create database backup files as gzip, then send an email with the attachment (same; each database is around 25MB).

    My questions:

    1. What's the difference between using a PHP file and a batch file for the cron job?
    2. Which method is better in terms of backup speed and sending email, and (maybe) safety (e.g., less chance of file corruption)?
    3. If I set the cron job hourly, will it affect my web performance? I mean, let's say my website has 100+ users online now, each making transactions against MySQL. If I perform a backup at my web peak hour, will it decrease performance - loading speed, proneness to errors, etc.?

    P.S. If you need my PHP and batch file code, please ask me to post it here. I didn't post it now because it's very simple, standard code.
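
    Whichever wrapper drives it, both methods come down to the same dump-and-compress pipeline. A minimal sketch (database name, credentials and addresses are placeholders; shown in shell form - on Windows the same mysqldump|gzip pipeline runs from a batch file, with a command-line mailer such as blat in place of mutt):

        #!/bin/bash
        # Hypothetical hourly dump: one gzipped file per database, mailed out.
        DB=mydatabase
        OUT=/backups/${DB}_$(date +%Y%m%d%H).sql.gz

        # --single-transaction takes a consistent InnoDB snapshot without
        # locking tables, which keeps the impact on live users low.
        mysqldump --single-transaction -u backup -pSECRET "$DB" | gzip > "$OUT"

        echo "Backup of $DB attached." | mutt -s "DB backup $(date +%F)" -a "$OUT" -- admin@example.com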

  • stsadm crash after sps07 to sps10 upgrade

    - by elhombre
    Hi all, I have upgraded from SPS 2007 to SPS 2010 and I am now trying to make backups of SPS 2010 using the command:

        stsadm -o backup -directory c:\backup -backupmethod full

    The problem is that stsadm crashes when trying to do the backup. Even worse, when I try just to start stsadm from the command line, I get the following error:

        Unhandled exception: System.MissingMethodException: Method not found: "Void Microsoft.SharePoint.SPRequestManager.Dispose()". at Microsoft.SharePoint.StsAdmin.SPStsAdmin.Main(String[] args)

    Holy moly, what is happening?!
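
    A MissingMethodException this early usually means stsadm is loading mismatched SharePoint assemblies, e.g. because the post-upgrade configuration step never completed. One common first step (a suggestion, not a confirmed fix for this case) is to re-run the configuration wizard from the command line so the installed binaries and the configuration database are brought back in sync:

        cd "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN"
        psconfig -cmd upgrade -inplace b2b -wait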

  • Understanding Windows 8 Recovery options

    - by stuffe
    Background: I am preparing a PC that I am sending to a relative abroad, who has little or no internet access and next to no sensible options for getting IT support should anything go wrong. As such, I am trying to provide a full set of recovery options such that they are able to reinstall the OS with minimum fuss or assistance if required.

    The PC is a brand new Acer laptop that came with Windows 7 pre-installed (and an associated recovery partition) and a free upgrade to Windows 8. I have installed Windows 8 from scratch, performing a format and clean install from media I burned from the official download. The existing Windows 7 recovery partition is still there, and I can still boot from it. I have created recovery DVDs of that in case it is ever lost. Here are my recovery options so far:

    - I can perform a factory reset of Win 7 via the recovery partition.
    - I can perform a factory reset of Win 7 via burned recovery DVDs.
    - I can re-install Windows 8 cleanly from a DVD.

    All of these are useful, but not what I want, because the first two methods use Win 7 and still fill the machine with crapware, and the latter doesn't provide for any post-install customisation and software installation. So I am looking to see what other options are available to perform a Windows 8 recovery that will be more than a simple install. I am aware that Win 8 comes with some useful refresh tools:

    - Refresh your PC: re-installs Win 8 over the top of your existing installation, recovering from any Windows corruption etc. I can run this from my current install, although it says some files are missing that will be provided by my install or recovery media - which seems to be code for "stick your install DVD in" - and it starts after I do that. Unfortunately, for this particular laptop you need to specify a particular WiFi driver or the install bombs out part way through with IRQL errors, and this refresh method skips the part where you can load a driver, so it's no use to me. I think I can fix this by creating a custom recovery image using the recimg.exe command, but it takes hours to complete so I haven't tried it yet.
    - Reset your PC: performs a full install and loses all your files. Again, it needs my install media inserting before it will do anything, but then it produces an error (will include later when I recreate it...).

    Now, these recovery options look useful (in principle, although both fail for me), but they rely on having a working system to access the tools, which leads me to the last option: making a recovery USB drive. I have made a recovery drive, and it should perform loads of useful things, including copying my Win 7 recovery partition to the drive, providing the above refresh and reset options, providing other troubleshooting options, and restoring from a custom image. Only none of them seem to work for me:

    - Creating the recovery drive: the option to include my recovery partition is greyed out. The partition exists and works fine - why will it not copy it?
    - Refresh: I imagine this would have the same issues as I described before, but this is moot, because when I try it I am told that "the drive where Windows is installed is locked, please unlock the drive and try again", with no info on what that means or how to do it.
    - Reset: again, probably pointless as I can just use the DVD, but it also errors: "Unable to reset your PC. A required drive partition is missing".
    - System Restore: should let me roll back a bad driver etc. as per normal in Windows, only it simply says "To use System Restore you must specify which Windows installation to restore. Restart this computer, select an operating system, and select System Restore". ?!?!
    - System Image Recovery: this seems to offer to restore from a Windows system image, but this is deprecated in Windows 8, although you can still make one using the Windows 7 backup tools. However, the resultant file is too large to put on the USB stick, as it's FAT-formatted, and it would be a massive stack of DVDs anyway. So, useless. It would be nice if it worked with the custom recovery image you can use with the refresh command, but there seems to be no option for this.
    - Automatic Repair: some diagnostics, which seem useless, as it happily tells me it can't fix my problem even though I have none.
    - Command Prompt: yay, this works! What on earth do I want to use it for...

    Had any of the above worked, it might be useful, but as any form of install still requires you to have the DVD, and any form of custom recovery image also requires you to have either a massive stack of DVDs or an NTFS-formatted backup device in addition to the recovery drive, it sort of ruins the point. It doesn't seem rocket science: I want to create a bootable USB drive that I can refresh Windows over an existing install with, perform a clean reinstall to a bare system with, or recover a customised image with existing apps installed from. If anyone can point me in a direction that allows me to make a single recovery drive do all these things, I would appreciate it. I have a 32GB USB3 thumb drive that I bought for this very purpose, but it seems to be fighting to let me do anything useful. At this rate I will be making a DriveImageXML recovery stick and dumping the OS with that, which I know works, but isn't as elegant as using the proper tools.
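
    For the recimg route mentioned above, a sketch of the commands involved (the folder name is a placeholder; run from an elevated command prompt): recimg captures the current installation, installed desktop programs included, and registers the result as the image "Refresh your PC" will use.

        mkdir C:\CustomRefreshImage
        recimg /createimage C:\CustomRefreshImage
        recimg /showcurrent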

  • How to back up server with rsync, preserving ownership/permissions without root login

    - by olilarkin
    I am setting up a backup server on which I want to run rsync over ssh to back up content on other servers every night. I would like to set up ssh keys to make it password-less, but I want to preserve ownership and permissions of files. There are a number of users on the servers to be backed up which won't all exist on the backup server. What would be the best way to do this? I guess the backup job will need to connect as root, but I don't want to enable root ssh access on the servers.

    Thanks for any tips,
    Oli

    PS: all servers are running Ubuntu Server 12.04 LTS and are behind a university firewall.
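
    One common pattern (a sketch; the host name and paths are placeholders): connect as an unprivileged backup user, but allow that user to run rsync - and only rsync - as root via sudo on each source server, so every file can be read and its ownership preserved without enabling root logins.

        # On each server to be backed up, in sudoers (edit via visudo):
        #   backupuser ALL=(root) NOPASSWD: /usr/bin/rsync

        # On the backup server: have rsync invoke the remote side via sudo.
        # -a preserves permissions and times; --numeric-ids keeps ownership
        # correct even for UIDs that don't exist on the backup server.
        rsync -a --numeric-ids --rsync-path="sudo rsync" \
            backupuser@server1.example.com:/home/ /backups/server1/home/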

  • Report of a user's membership in groove spaces?

    - by Jeremy
    Hi all, I want to find out whether there is a way in Microsoft Groove to see which spaces a user is in (and, conversely, which spaces the user is not in). We run a free script called Personal Backup for Groove for our backups. The script dumps out all the Groove spaces that our "backup user" is a member of. However, if someone creates a new space and doesn't invite the backup user, that space will never get backed up. We're trying to find a way to audit the backup user's membership so that we can ensure it is invited to all spaces. Thanks!

  • BackupPC - why does it use rsync --sender --server ... ?

    - by Jakobud
    I'm in the process of experimenting with BackupPC on a CentOS 5.5 server. I have everything pretty much set up with default values. I tried setting up a basic backup of a host's /www directory. The backup fails with the following errors:

        full backup started for directory /www
        Running: /usr/bin/ssh -q -x -l root target /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
        Xfer PIDs are now 30395
        Read EOF: Connection reset by peer
        Tried again: got 0 bytes
        Done: 0 files, 0 bytes
        Got fatal error during xfer (Unable to read 4 bytes)
        Backup aborted (Unable to read 4 bytes)
        Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)

    First of all, yes, I have my ssh keys set up so that I can ssh to the target server without a password. In the process of troubleshooting, I tried the above ssh command directly from the command line, and it hangs. Looking at the end of the debug messages for ssh, I get:

        debug1: Sending subsystem: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
        Request for subsystem '/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/' failed on channel 0

    Next I started looking at the rsync flags. I did not recognize --server and --sender. Looking at the rsync man pages, sure enough, I don't see anything about --server or --sender in there. What are those for? Looking at the BackupPC config, I have this:

        RsyncClientPath = /usr/bin/rsync
        RsyncClientCmd = $sshPath -q -x -l root $host $rsyncPath $argList+

    And for the arguments, I have the following listed:

        --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive

    Notice there is no --server, --sender or --ignore-times. Why are these things getting added in? Is this part of the problem?
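
    For what it's worth, --server and --sender are rsync's internal flags - deliberately left out of the man page - which the client passes to start the remote rsync in server mode as the sending side, so BackupPC adding them is normal. A quick sketch for checking the transport outside BackupPC ("target" as above):

        # Does a plain rsync-over-ssh pull from the target work at all?
        rsync -av -e "ssh -q -x -l root" target:/www/ /tmp/www-test/

        # Is rsync actually installed where BackupPC expects it?
        ssh -l root target 'ls -l /usr/bin/rsync'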

  • Bacula not backing up all the files it should be doing

    - by Nigel Ellis
    I have Bacula (5.2) running on a Fedora 14 system, backing up several different computers including Windows 7, Windows 2003 and Windows 2008. When backing up the Windows 2008 server, the backup stops after a relatively small amount has been backed up and says the backup was okay. The fileset I am trying to back up should be around 323GB, but it manages a mere 27GB before stopping - without erring. I did try creating a mount on the Fedora computer to the server I am trying to back up, and Bacula managed to copy 58GB. When I tried to use the mount to copy the files manually, I was able to copy them all - there are no problems with permissions etc. on the mount. Please can anyone give a reason why Bacula would just stop? I have heard there is a 260-character path limit, but some of the files that should have been copied resolve to a shorter filename than some that have been backed up.
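
    One way to see which files Bacula itself considers in scope, without running a backup (a sketch; the job name is a placeholder): the bconsole estimate command walks the fileset on the client and lists every file it would back up, which makes silent exclusions visible.

        # List everything the job would back up, without backing anything up.
        echo "estimate job=Win2008-Backup listing" | bconsole > /tmp/estimate.txt
        wc -l /tmp/estimate.txt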

  • rsync - Exclude files that are over a certain size?

    - by Rory
    I am doing a backup of my desktop to a remote machine. I'm basically doing:

        rsync -a ~ example.com:backup/

    However, there are loads of large files, e.g. Wikipedia dumps etc. Most of the files I care a lot about are small, such as Firefox cookie files or .bashrc. Is there some invocation of rsync that will exclude files that are over a certain size? That way I could copy all files that are less than 10MB first, then do all files. That way I can do a fast backup of the most important files, then a longer backup of everything else.
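
    Recent rsync versions have a flag for exactly this: --max-size skips files larger than the given size. A sketch of the two-pass approach described above:

        # Pass 1: fast backup of everything under 10MB.
        rsync -a --max-size=10m ~ example.com:backup/

        # Pass 2: everything, including the large files.
        rsync -a ~ example.com:backup/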

  • OS X wants to back up Macintosh HD to itself

    - by slhck
    From time to time, OS X (10.6) will ask me if I want to perform a backup of my Macintosh HD, but the backup destination is obviously the HD itself. I have not plugged in any external device, nor is there a Time Capsule network share. (The original post includes a screenshot of the prompt at this point.) Moreover, the Macintosh HD will suddenly have a Time Machine icon in Finder. Has somebody experienced the same, and could you help me identify how to stop OS X from using its own disk as the backup destination?

  • Command does not execute in crontab while command itself works just fine

    - by fuzzybee
    I have this script from Colin Johnson on Github - https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup - and it seems great. I have modified it to send email to myself every time an EBS snapshot is created or deleted. The following works like a charm:

        ec2-automate-backup.sh -v "vol-myvolumeid" -k 3

    However, it does not execute at all as part of my crontab (I didn't receive any emails):

        #some command that got commented out
        */5 * * * * ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;
        * * * * * date >> /root/logs/crontab.log;
        */5 * * * * date >> /root/logs/crontab2.log

    Please note that the 2nd and 3rd entries execute just fine, as I can see the date and time in the log files. What could I have missed here? The full ec2-automate-backup.sh is as follows:

        #!/bin/bash -
        # Author: Colin Johnson / [email protected]
        # Date: 2012-09-24
        # Version 0.1
        # License Type: GNU GENERAL PUBLIC LICENSE, Version 3
        #
        #confirms that executables required for successful script execution are available
        prerequisite_check()
        {
          for prerequisite in basename ec2-create-snapshot ec2-create-tags ec2-describe-snapshots ec2-delete-snapshot date
          do
            #use of "hash" chosen as it is a shell builtin and will add programs to hash table, possibly speeding execution. Use of type also considered - open to suggestions.
            hash $prerequisite &> /dev/null
            if [[ $? == 1 ]] #hash exits with exit status of 1 if the executable was not found
            then
              echo "In order to use `basename $0`, the executable \"$prerequisite\" must be installed." 1>&2 | mailx -s "Error happened 0" [email protected] ; exit 70
            fi
          done
        }

        #get_EBS_List gets a list of available EBS instances depending upon the selection_method of EBS selection that is provided by user input
        get_EBS_List()
        {
          case $selection_method in
            volumeid)
              if [[ -z $volumeid ]]
              then
                echo "The selection method \"volumeid\" (which is $app_name's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: \"-v vol-6d6a0527\",\"-s volumeid -v vol-6d6a0527\" or \"-v \"vol-6d6a0527 vol-636a0112\"\" if multiple volumes are to be selected." 1>&2 | mailx -s "Error happened 1" [email protected] ; exit 64
              fi
              ebs_selection_string="$volumeid"
              ;;
            tag)
              if [[ -z $tag ]]
              then
                echo "The selected selection_method \"tag\" (-s tag) requires a valid tag (-t key=value) for operation. Correct usage is as follows: \"-s tag -t backup=true\" or \"-s tag -t Name=my_tag.\"" 1>&2 | mailx -s "Error happened 2" [email protected] ; exit 64
              fi
              ebs_selection_string="--filter tag:$tag"
              ;;
            *)
              echo "If you specify a selection_method (-s selection_method) for selecting EBS volumes you must select either \"volumeid\" (-s volumeid) or \"tag\" (-s tag)." 1>&2 | mailx -s "Error happened 3" [email protected] ; exit 64
              ;;
          esac
          #creates a list of all ebs volumes that match the selection string from above
          ebs_backup_list_complete=`ec2-describe-volumes --show-empty-fields --region $region $ebs_selection_string 2>&1`
          #takes the exit status of the previous command
          ebs_backup_list_result=`echo $?`
          if [[ $ebs_backup_list_result -gt 0 ]]
          then
            echo -e "An error occurred when running ec2-describe-volumes. The error returned is below:\n$ebs_backup_list_complete" 1>&2 | mailx -s "Error happened 4" [email protected] ; exit 70
          fi
          ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUME | cut -f 2`
          #code below will output list of EBS volumes to be backed up:
          echo -e "Now outputting ebs_backup_list:\n$ebs_backup_list"
        }

        create_EBS_Snapshot_Tags()
        {
          #snapshot_tags holds all tags that need to be applied to a given snapshot - by aggregating tags we ensure that ec2-create-tags is called only once
          snapshot_tags=""
          #if $name_tag_create is true then append ec2ab_${ebs_selected}_$date_current to the variable $snapshot_tags
          if $name_tag_create
          then
            ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2`
            snapshot_tags="$snapshot_tags --tag Name=ec2ab_${ebs_selected}_$date_current"
          fi
          #if $purge_after_days is set, then append $purge_after_date to the variable $snapshot_tags
          if [[ -n $purge_after_days ]]
          then
            snapshot_tags="$snapshot_tags --tag PurgeAfter=$purge_after_date --tag PurgeAllow=true"
          fi
          #if $snapshot_tags is not zero length then set the tags on the snapshot using ec2-create-tags
          if [[ -n $snapshot_tags ]]
          then
            echo "Tagging Snapshot $ec2_snapshot_resource_id with the following Tags:"
            ec2-create-tags $ec2_snapshot_resource_id --region $region $snapshot_tags
            #echo "Snapshot tags successfully created" | mailx -s "Snapshot tags successfully created" [email protected]
          fi
        }

        date_command_get()
        {
          #finds full path to date binary
          date_binary_full_path=`which date`
          #command below is used to determine if date binary is gnu, macosx or other
          date_binary_file_result=`file -b $date_binary_full_path`
          case $date_binary_file_result in
            "Mach-O 64-bit executable x86_64") date_binary="macosx" ;;
            "ELF 64-bit LSB executable, x86-64, version 1 (SYSV)"*) date_binary="gnu" ;;
            *) date_binary="unknown" ;;
          esac
          #based on the installed date binary the case statement below will determine the method to use to determine "purge_after_days" in the future
          case $date_binary in
            gnu) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
            macosx) date_command="date -v+${purge_after_days}d -u +%Y-%m-%d" ;;
            unknown) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
            *) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
          esac
        }

        purge_EBS_Snapshots()
        {
          #snapshot_tag_list is a string that contains all snapshots with either the key PurgeAllow or PurgeAfter set
          snapshot_tag_list=`ec2-describe-tags --show-empty-fields --region $region --filter resource-type=snapshot --filter key=PurgeAllow,PurgeAfter`
          #snapshot_purge_allowed is a list of all snapshot_ids with PurgeAllow=true
          snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep .*PurgeAllow'\t'true | cut -f 3`
          for snapshot_id_evaluated in $snapshot_purge_allowed
          do
            #gets the "PurgeAfter" date which is in UTC with YYYY-MM-DD format (or %Y-%m-%d)
            purge_after_date=`echo "$snapshot_tag_list" | grep .*$snapshot_id_evaluated'\t'PurgeAfter.* | cut -f 5`
            #if purge_after_date is not set then we have a problem. Need to alert user.
            if [[ -z $purge_after_date ]]
            #Alerts user to the fact that a Snapshot was found with PurgeAllow=true but with no PurgeAfter date.
            then
              echo "A Snapshot with the Snapshot ID $snapshot_id_evaluated has the tag \"PurgeAllow=true\" but does not have a \"PurgeAfter=YYYY-MM-DD\" date. $app_name is unable to determine if $snapshot_id_evaluated should be purged." 1>&2 | mailx -s "Error happened 5" [email protected]
            else
              #convert both the date_current and purge_after_date into epoch time to allow for comparison
              date_current_epoch=`date -j -f "%Y-%m-%d" "$date_current" "+%s"`
              purge_after_date_epoch=`date -j -f "%Y-%m-%d" "$purge_after_date" "+%s"`
              #perform comparison - if $purge_after_date_epoch is a lower number than $date_current_epoch then the PurgeAfter date is earlier than the current date - and the snapshot can be safely removed
              if [[ $purge_after_date_epoch < $date_current_epoch ]]
              then
                echo "The snapshot \"$snapshot_id_evaluated\" with the Purge After date of $purge_after_date will be deleted."
                ec2-delete-snapshot --region $region $snapshot_id_evaluated
                echo "Old snapshots successfully deleted for $volumeid" | mailx -s "Old snapshots successfully deleted for $volumeid" [email protected]
              fi
            fi
          done
        }

        #calls prerequisite_check function to ensure that all executables required for script execution are available
        prerequisite_check

        app_name=`basename $0`
        #sets defaults
        selection_method="volumeid"
        region="ap-southeast-1"
        #date_binary allows a user to set the "date" binary that is installed on their system and, therefore, the options that will be given to the date binary to perform date calculations
        date_binary=""
        #sets the "Name" tag set for a snapshot to false - using "Name" requires that ec2-create-tags be called in addition to ec2-create-snapshot
        name_tag_create=false
        #sets the Purge Snapshot feature to false - this feature will eventually allow the removal of snapshots that have a "PurgeAfter" tag that is earlier than current date
        purge_snapshots=false
        #handles options processing
        while getopts :s:r:v:t:k:pn opt
        do
          case $opt in
            s) selection_method="$OPTARG" ;;
            r) region="$OPTARG" ;;
            v) volumeid="$OPTARG" ;;
            t) tag="$OPTARG" ;;
            k) purge_after_days="$OPTARG" ;;
            n) name_tag_create=true ;;
            p) purge_snapshots=true ;;
            *) echo "Error with Options Input. Cause of failure is most likely that an unsupported parameter was passed or a parameter was passed without a corresponding option." 1>&2 ; exit 64 ;;
          esac
        done

        #sets date variable
        date_current=`date -u +%Y-%m-%d`
        #sets the PurgeAfter tag to the number of days that a snapshot should be retained
        if [[ -n $purge_after_days ]]
        then
          #if the date_binary is not set, call the date_command_get function
          if [[ -z $date_binary ]]
          then
            date_command_get
          fi
          purge_after_date=`$date_command`
          echo "Snapshots taken by $app_name will be eligible for purging after the following date: $purge_after_date."
        fi

        #get_EBS_List gets a list of EBS instances for which a snapshot is desired. The list of EBS instances depends upon the selection_method that is provided by user input
        get_EBS_List

        #the loop below is called once for each volume in $ebs_backup_list - the currently selected EBS volume is passed in as "ebs_selected"
        for ebs_selected in $ebs_backup_list
        do
          ec2_snapshot_description="ec2ab_${ebs_selected}_$date_current"
          ec2_create_snapshot_result=`ec2-create-snapshot --region $region -d $ec2_snapshot_description $ebs_selected 2>&1`
          if [[ $? != 0 ]]
          then
            echo -e "An error occurred when running ec2-create-snapshot. The error returned is below:\n$ec2_create_snapshot_result" 1>&2 ; exit 70
          else
            ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2`
            echo "Snapshots successfully created for volume $volumeid" | mailx -s "Snapshots successfully created for $volumeid" [email protected]
          fi
          create_EBS_Snapshot_Tags
        done

        #if purge_snapshots is true, then run purge_EBS_Snapshots function
        if $purge_snapshots
        then
          echo "Snapshot Purging is Starting Now."
          purge_EBS_Snapshots
        fi

    Cron log:

        Oct 23 10:24:01 ip-10-130-153-227 CROND[28214]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:24:01 ip-10-130-153-227 CROND[28215]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:25:01 ip-10-130-153-227 CROND[28228]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:25:01 ip-10-130-153-227 CROND[28229]: (root) CMD (date >> /root/logs/crontab2.log)
        Oct 23 10:26:01 ip-10-130-153-227 CROND[28239]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:27:01 ip-10-130-153-227 CROND[28247]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:27:01 ip-10-130-153-227 CROND[28248]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:28:01 ip-10-130-153-227 CROND[28263]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:29:01 ip-10-130-153-227 CROND[28275]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28292]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28293]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28294]: (root) CMD (date >> /root/logs/crontab2.log)
        Oct 23 10:31:01 ip-10-130-153-227 CROND[28312]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:32:01 ip-10-130-153-227 CROND[28319]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:33:01 ip-10-130-153-227 CROND[28325]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:33:01 ip-10-130-153-227 CROND[28324]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:34:01 ip-10-130-153-227 CROND[28345]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:35:01 ip-10-130-153-227 CROND[28362]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:35:01 ip-10-130-153-227 CROND[28363]: (root) CMD (date >> /root/logs/crontab2.log)

    Mail to root:

        From [email protected] Tue Oct 23 06:00:01 2012
        Return-Path: <[email protected]>
        Date: Tue, 23 Oct 2012 06:00:01 GMT
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <root@ip-10-130-153-227> root ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3
        Content-Type: text/plain; charset=UTF-8
        Auto-Submitted: auto-generated
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>
        Status: R

        /bin/sh: root: command not found

  • Simple one-way synchronisation of user password list between servers

    - by Renaud Bompuis
    Using a RedHat-derivative distro (CentOS), I'd like to keep the list of regular users (UID over 500), and the group and shadow files, pushed to a backup server. The sync is only one-way, from the main server to the backup server. I don't really want to have to deal with LDAP or NIS. All I need is a simple script that can be run nightly to keep the backup server updated. The main server can SSH into the backup system. Any suggestions?

    Edit: Thanks for the suggestions so far, but I think I didn't make myself clear enough. I'm only looking at synchronising normal users whose UID is at or above 500. System/service users (with UID below 500) may be different on both systems, so you can't just sync the whole files, I'm afraid.
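
    A minimal sketch of such a nightly script (the host name and paths are placeholders): filter the UID >= 500 entries out of passwd, group and shadow, then push the filtered copies to the backup server. Merging them into the backup server's own files is left as an explicit, separate step, since doing that blindly would be risky.

        #!/bin/bash
        # Sketch: stage regular-user (UID >= 500) account entries on a
        # backup server. Must run as root to read /etc/shadow.
        BACKUP_HOST=backup.example.com
        STAGE=/var/backups/accounts

        # passwd and group can be filtered on their numeric ID fields;
        # 65534 (nobody/nfsnobody) is excluded.
        awk -F: '$3 >= 500 && $3 < 65534' /etc/passwd > /tmp/passwd.regular
        awk -F: '$3 >= 500 && $3 < 65534' /etc/group  > /tmp/group.regular

        # shadow has no UID field, so select rows by the user names found above.
        cut -d: -f1 /tmp/passwd.regular | while read -r user; do
            grep "^${user}:" /etc/shadow
        done > /tmp/shadow.regular

        # One-way push over ssh.
        ssh "$BACKUP_HOST" "mkdir -p $STAGE"
        scp /tmp/passwd.regular /tmp/group.regular /tmp/shadow.regular "$BACKUP_HOST:$STAGE/"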

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss, but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics - items like:

    - number of databases backed up
    - size of db backup file
    - size of db backup file compressed
    - time to make backup
    - time to zip file

    I want to be able to:

    A) have notifications if the jobs are not run according to schedule
    B) set thresholds on the statistics which would trigger notifications
    C) trend and graph the statistics

    I am planning on sending this information to the monitoring application through an HTTP POST, or the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an update statement as part of some maintenance which did a cross-join update on two tables with 200,000 records in each. That's 40 billion row updates, which would explain part of how the log grew to 200GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide - we have almost 200 databases residing there. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrunk the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log files. Paul Randal, in http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx, writes:

        Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database.

    Were there any other options I'm not aware of?
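
    For comparison, a sketch of the usual alternatives to TRUNCATE_ONLY (the database and logical log file names are placeholders): either take a real log backup so space can be reclaimed with the recovery chain intact, or deliberately abandon the chain by flipping to SIMPLE recovery before shrinking.

        -- Option 1: a real log backup, keeping the recovery chain intact.
        BACKUP LOG [MyDatabase] TO DISK = N'E:\Backups\MyDatabase_log.trn';
        DBCC SHRINKFILE (MyDatabase_log, 1024);  -- target size in MB

        -- Option 2: abandon the log chain deliberately, then shrink.
        ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (MyDatabase_log, 1024);
        ALTER DATABASE [MyDatabase] SET RECOVERY FULL;
        -- A full backup afterwards restarts the log backup chain.
        BACKUP DATABASE [MyDatabase] TO DISK = N'E:\Backups\MyDatabase_full.bak';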

  • Why is the Volume Shadow Copy service stopping?

    - by David Mackintosh
    I am running Windows 7 Professional, 64-bit. I am running a backup-over-the-internet software client which depends on the Volume Shadow Copy service running. Since I installed Service Pack 1 (or rather, didn't object when Windows Update forced Service Pack 1 on me), the backup service is failing to back everything up because VSC isn't running. Most of the time it fails to back up such noise as the Security Essentials database or the Messenger Live contact list - stuff I really don't care about - but I don't want to fall into the trap of accepting an Error-state backup as "normal". At the recommendation of the backup software, I have set the VSC service startup mode to be Automatic. When I look in the Event Log, System channel, I can see at boot time:

        The Volume Shadow Copy service entered the running state.

    ...and then two or three minutes later:

        The Volume Shadow Copy service entered the stopped state.

    How do I figure out why VSC is stopping? At the suggestion of the backup vendor, I have already followed the suggestions from http://support.microsoft.com/default.aspx/kb/940184:

        net stop SENS
        net stop EventSystem
        net start EventSystem
        net start SENS
        net stop COMSysApp
        net stop SwPrv
        net stop VSS
        cd /d C:\Windows\system32
        regsvr32 ole32.dll /s
        regsvr32 oleaut32.dll /s
        regsvr32 vss_ps.dll /s
        vssvc /register /s
        regsvr32 /i swprv.dll /s
        regsvr32 /i eventcls.dll /s
        regsvr32 es.dll /s
        regsvr32 stdprov.dll /s
        regsvr32 vssui.dll /s
        regsvr32 msxml.dll /s
        regsvr32 msxml3.dll /s
        regsvr32 msxml4.dll /s
        net start SwPrv
        net start VSS
        net start ProtectedStorage

    ...and per http://support.microsoft.com/kb/940184 I have deleted the key tree:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions

    I have also run chkdsk /F and chkdsk /R on both permanent hard disks. (I had a similar problem with another computer - same OS, same failure, same start point after the SP1 install - but the problem went away when I forced Volume Shadow Copy Services to Automatic startup rather than Manual. I did not have to resort to following the Microsoft KB instructions.)
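
    One piece of background that may defuse the mystery (an observation, not from the original thread): VSS is a demand-start service, so "entered the stopped state" a few minutes after boot is normal behaviour - it idles out and is restarted whenever a requestor asks for a shadow copy. To check whether the subsystem itself is healthy, vssadmin lists the writers and providers and flags any in a failed state:

        vssadmin list writers
        vssadmin list providers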

  • How can I check whether a volume is mounted where it is supposed to be using Python?

    - by Ben Hymers
    I've got a backup script written in Python which creates the destination directory before copying the source directory to it. I've configured it to use /external-backup as the destination, which is where I mount an external hard drive. I just ran the script without the hard drive being turned on (or being mounted) and found that it was working as normal, albeit making a backup on the internal hard drive, which has nowhere near enough space to back itself up. My question is: how can I check whether the volume is mounted in the right place before writing to it? If I can detect that /external-backup isn't mounted, I can prevent writing to it. The bonus question is why was this allowed, when the OS knows that directory is supposed to live on another device, and what would happen to the data (on the internal hard drive) should I later mount that device (the external hard drive)? Clearly there can't be two copies on different devices at the same path! Thanks in advance!
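
    The standard library covers the main question directly: os.path.ismount reports whether a path is a real mount point. A sketch of the guard (the destination path is the one from the question):

        import os
        import sys

        BACKUP_DEST = "/external-backup"

        # os.path.ismount returns True only if the path is an actual mount
        # point, i.e. the external drive is mounted there rather than the
        # path being a plain directory on the internal disk.
        if not os.path.ismount(BACKUP_DEST):
            sys.exit(BACKUP_DEST + " is not mounted - refusing to back up "
                     "onto the internal disk")

        # ... proceed with the backup ...

    As for the bonus question: nothing stops a program writing into an unmounted mount point's directory; those files land on the internal disk and are simply hidden from view once the external drive is mounted over the top of them.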

  • Bash loop to move directories on a remote host via ssh

    - by I Forgot
    I'm trying to figure out a way to perform the following loop on a remote host via ssh. Basically it renames a series of directories to create a rotating backup - but it runs locally, and I want it to work against directories on a remote host.

        while [ $n -gt 0 ]; do
            src=$(($n-1))
            dst=$n
            if [ -d /backup/$src ]; then
                mv /backup/$src /backup/$dst
            fi
            ((n--))
        done
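
    One way to run the whole loop remotely (a sketch; the host name and the starting value of n are placeholders): feed the script to ssh on stdin, so the loop executes in a shell on the remote machine and no per-command quoting is needed.

        #!/bin/bash
        # Run the rotation loop on the remote host by piping it to a remote
        # shell. The quoted heredoc delimiter ('EOF') stops the local shell
        # from expanding $n and friends before they reach the remote side.
        ssh backup@remote.example.com 'bash -s' <<'EOF'
        n=7   # number of rotations kept (assumed)
        while [ $n -gt 0 ]; do
            src=$((n-1))
            dst=$n
            if [ -d /backup/$src ]; then
                mv /backup/$src /backup/$dst
            fi
            ((n--))
        done
        EOF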

  • Win32: edit control selection in dialog-based app

    - by Lars Kanto
    I have a dialog-based app with an edit control in it. When I minimize and restore the app, everything's OK. But when I hide all the windows by holding down the Windows logo key and pressing "D", and then restore the app, the edit control selects everything inside it. How do I make it not select the text on restore?
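
    A workaround sketch (not a confirmed fix - the control ID is a placeholder): when the dialog is re-activated, the dialog manager hands focus back to the edit control and selects its entire contents; clearing the selection after activation undoes that. Per the EM_SETSEL documentation, a start value of -1 deselects any current selection.

        // In the dialog procedure:
        case WM_ACTIVATE:
            if (LOWORD(wParam) != WA_INACTIVE)
            {
                // Clear any selection the dialog manager made on focus
                // restore. If default processing re-selects afterwards,
                // this may need to be deferred with PostMessage instead.
                SendDlgItemMessage(hDlg, IDC_EDIT1, EM_SETSEL, (WPARAM)-1, 0);
            }
            break;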

  • Time Machine link error

    - by robinjam
    When attempting to perform a Time Machine backup, the backup appears to proceed as normal, until it finishes copying files at which point it complains that an "error occurred while linking files for" one of my external hard disks. During the previous backup that particular disk was in fact empty, and therefore I can't understand why Time Machine is attempting to link back to it. But alas. I've verified all my disks using Disk Utility and they all appear to be fine. Does anybody know what causes this error, and how I might go about fixing it? Failing that is there a way to force Time Machine to create a brand new backup rather than an incremental one? Thanks in advance!
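
    On the last point, one way to force a brand-new, non-incremental backup (a heavily hedged sketch - the volume and machine names are placeholders, and this discards the link to the old history, so only do it once the old backups are confirmed dispensable) is to move the machine's folder out of Backups.backupdb on the backup volume; Time Machine then treats the next run as an initial full backup:

        sudo mv "/Volumes/Backup/Backups.backupdb/MyMac" \
                "/Volumes/Backup/Backups.backupdb/MyMac.old"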
