Search Results

Search found 6101 results on 245 pages for 'incremental backup'.


  • Applescript create event in calendar, how do I remove the default alert?

    - by zero0cool
    Running 10.8 Mountain Lion, I'm trying to create a new event with AppleScript like this:

        set theDate to (current date)
        tell application "Calendar"
            tell calendar "Calendar"
                set timeString to time string of theDate
                set newEvent to make new event at end with properties {description:"Last Backup", summary:"Last Backup " & timeString, location:"To a local unix system", start date:theDate, end date:theDate + 15 * minutes, allday event:false, status:confirmed}
                tell newEvent
                    delete every display alarm
                    delete every sound alarm
                    delete every mail alarm
                    delete every open file alarm
                end tell
            end tell
        end tell

    However, this does not remove the default Calendar alert which one can set through Calendar preferences (30 minutes prior in my case). How do I create an event with no alarms at all through AppleScript?

    Read the article

  • limit the speed of writing files to NFS

    - by xgwang
    CentOS 5.6. An NFS share is mounted on the server to provide backup disk space. When the backup job starts it can reach 80 MB/s, and we really do not expect it to take that much bandwidth, so I need to find a way to limit the speed of writing to NFS. I tried rsync with --bwlimit=5000; however, it only limited the read speed, while the accumulated data was still written out at 80 MB/s, with no write activity at all for seconds in between. Is there any way to limit the write speed to NFS?
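
    A possible workaround (a sketch, not from the original question; the export name and mount point below are assumptions): if the client's page cache absorbs the data and then flushes it to the NFS server in bursts, mounting the share with synchronous writes makes rsync's --bwlimit throttle the writes as well, because each write must reach the server before the next one is issued.

        # Sketch only: server:/export and /mnt/backup are placeholder names.
        # Remount the NFS share with synchronous writes so the client cannot
        # buffer the data and flush it at full link speed later.
        mount -t nfs -o sync,wsize=32768 server:/export /mnt/backup

        # With synchronous writes, --bwlimit (in KB/s) now caps the effective write rate too.
        rsync -a --bwlimit=5000 /data/ /mnt/backup/data/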

    Read the article

  • Need help recovering a corrupt SQL database

    - by user570079
    I have a very special case that I have been working on for several days. I have a very large SQL Server 2008 database (about 2 TB) that contains 500 filegroups to support very large partitioned tables. Recently we had a catastrophic failure on one of the drives, we lost several filegroups, and the database became inaccessible. We have been doing filegroup backups on a daily basis, but due to other issues we lost our most recent backup of the log and the primary filegroup. We have all the data backed up, but the primary filegroup backup is old. There have been no schema changes since the primary filegroup backup, but the LSNs are now all out of sync and we cannot recover the data.

    I have tried everything I could think of (and have tried just about every trick and hack I could google), but I still end up at the same point, with messages saying that the files for filegroup x do not match the primary filegroup. I am now at the point of trying to edit the system tables (we have a separate temporary environment to do this, so we are not worried about corrupting any production databases). I have tried updating sys.sysdbreg, sys.sysbrickfiles, and sys.sysprufiles to try to trick SQL into thinking all the files are online, but a "Select * From OPENROWSET(TABLE DBPROP, 5)" shows a different database state from what I see in sys.sysdbreg. I am now thinking I need to somehow edit the headers of the actual data files to try to line up the LSNs with the primary.

    I appreciate any help anyone can give me here, but please do not respond with things like "you are not supposed to edit mdf/ndf files..." or "see msdn article...", etc. This is an advanced emergency case and I need a real hack so we can just get to the data in this corrupt database and export it to a fresh new database. I know there is a way to do this, but not knowing what the DBPROP system function does (i.e. does it look at system tables or does it actually open the file) is keeping me from figuring out how to fool SQL into allowing me to read these files. Thanks for any help.

    Read the article

  • Get-Mailbox not returning all mailboxes

    - by rotard
    I am trying to set up an Exchange mailbox backup job with Vembu StoreGrid, and StoreGrid is unable to list the mailboxes for the client. While I was troubleshooting the issue, I noticed another thing: running the Get-Mailbox command on the mail server as the backup user only shows the mailbox for that account, while running Get-Mailbox as my admin account returns a list of what appears to be all the mailboxes. My service account is a member of "Administrators", "Domain Admins", and "Domain Users". What additional permissions might be required to list all mailboxes in the system?

    Read the article

  • Reducing storage cost by moving old files to external USB HDDs. Your thoughts?

    - by cparker4486
    I've got about 300 GB of pictures and marketing data that is rarely accessed, and I'd like to get it off my main storage. I was thinking of simply adding two external USB HDDs to the server and moving all the files to one of the drives; the second drive would be the backup destination for the first. I'm working with Server 2003 R2 SP2. This would free a good amount of space on my main storage as well as reduce the complexity, backup window, and tape usage of my backups.

    Read the article

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of MongoDB. Output from the script (to STDERR) has the following exceptions, but the backup completes and the dump files are created:

        ###### WARNING ######
        STDERR written to during mongodump execution.
        The backup probably succeeded, as mongodump sometimes writes to STDERR, but you may wish to scan the error log below:
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: HostAndPort: bad port #
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed

    I know that the host and port are correct. If I run mongodump --host=127.0.0.1:27017 --journal (which is the effective command from automongobackup, based on the options set and my reading of the source code), everything runs cleanly without any error reporting and the dump files are created as expected. Why would automongobackup report connection errors, even though it does create the dump files, when a straight call to mongodump does not?

    Environment: Debian 6.0 Lenny (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23)), AutoMongoBackup VER 0.9, mongodb v2.0.2.
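
    A quick way to narrow this down (a sketch, not from the original post; the output paths are assumptions) is to run the same mongodump invocation by hand with STDERR captured separately, so anything the wrapper script itself prints is not mixed in with mongodump's own messages:

        # Sketch only: /tmp/mongodump-test and the .err path are placeholder names.
        mongodump --host=127.0.0.1:27017 --journal --out /tmp/mongodump-test 2> /tmp/mongodump-test.err

        # Whatever appears here came from mongodump itself, not from automongobackup.
        cat /tmp/mongodump-test.err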

    Read the article

  • Windows Home Server restore causes computer to be removed from the domain?

    - by unknown (google)
    I restored my Dell M4400, which is a company laptop, and now when I try to log on while connected to our corporate network I get an error saying that the domain controller could not be found or that the computer is not part of the domain. Everyone else can log on, so it seems my computer is no longer part of the domain, even though it thinks it is per its settings. One thing of note: my computer crashed on 1/14/10, but I restored from a backup that was made on 12/20/09, so I am not sure if that made a difference. Also, I tried running "gpupdate" to update my group policy, but that did not seem to help. Any ideas? It seems like a bit of a flaw in the backup system for computers that are part of a domain. I guess I wanted to hear from someone with more knowledge about how a computer is recognized as part of a domain, to know whether this should be expected when doing a restore or whether I should file a trouble ticket.

    Read the article

  • Snapshot/Save GPU Drivers

    - by ashes999
    Since I'm running XP/32-bit, my GPU drivers are quite fragile. I've spent several hours trying to back up and restore from old versions, on at least two separate occasions. Writing down the driver versions is not enough. I would like to somehow save, zip, back up, or snapshot the drivers so that if I need to reinstall my OS in the short term, I have a reliable way to get them back. ATI's website doesn't have the install kit anymore, and I don't have it saved; I googled, but didn't find the exact same version. How can I back up/save my drivers so that I can reinstall them later?

    Read the article

  • Data transfer is extremely slow after partitioning external USB drive

    - by user125912
    I bought an external USB 3.0 drive with 500 GB capacity. OS is Windows 7. I use it on a USB 2.0 port, no problem. Initially I used it without making several partitions and it was fast as hell. Then I had the great idea to make partitions: one for programs, one for data and one for backup. I chose the free EASEUS Partition Master 9.1.1 and ended up with these partitions:

        F: Apps, primary, NTFS, 100 GB
        H: Data, logical, NTFS, 250 GB
        B: Backup, logical, NTFS, 150 GB

    THE PROBLEM: When I copy files from C: to F: I get a transfer rate of about 100 KB/s! When I copy files from C: to H: I get a transfer rate of about 4 MB/s! That is all much too slow, slower than before. What can I do to speed this up? Thanks in advance!

    Read the article

  • Recover deleted files on windows 2008 file server

    - by aniga
    We have recently been hit by a weird virus which marked all files and folders as system files/folders and also hid them all, apart from some weird ones it created, including: ..exe, porn.exe, secret.exe, password.exe, etc. We have managed to restore the files with the attrib command, unhiding them and unmarking them as system files; however, we have noticed that we are missing some 4 to 5 folders, of which (with my luck) 2 belong to the two most important clients we have. I am not sure if these files were deleted by the worm/virus or by my colleagues, who are not owning up to it, but the files are now gone. Worst of all, we do not have any backup whatsoever. (Yes, I know, we should not have let that happen; it is a lesson learned, and since last night we have created two forms of backup, one to an external device and one in the cloud, but I doubt any of that will help us now.) We have 1 Windows 2008 file server and 4 client computers running Windows 7. I would be grateful if anyone can help us with how we can recover from this disaster, which could potentially put us out of business.

    Read the article

  • Autosaving files in emacs or xemacs (preferably on loss of focus)

    - by Spencer
    Ideally I want to replicate in emacs the functionality from TextMate whereby, on loss of focus (i.e. when I click away from the buffer), my file saves. If this isn't possible, I want to customize emacs so that it saves the file for every character I write. When I say this I don't mean I want to autosave to the ~ backup files; I want to save the file I am currently working on. I am working on a Fedora VM. Note I am not looking for a backup or autosave: I want the file I am actually in to be saved, so that if I loaded the HTML file I am editing in a web browser it would reflect my new changes without me having to save explicitly.

    Read the article

  • Transaction log is full and does not free up space

    - by titanium
    Hi, I have a database in SQL Server 2005 whose transaction log becomes full. It is using snapshot replication. I noticed the transaction log is not freeing up space, so I created an additional transaction log file. Three days have passed and the first transaction log is still full. I performed a full database backup and a transaction log backup, then tried to shrink the transaction log, but the shrink failed. Can anyone advise why shrinking the transaction log is failing? Any other recommendations on how to resolve the problem?

    Read the article

  • Error code 2503 - Cannot install software on Windows 7 (64Bit)

    - by SixfootJames
    A short while ago I had my hard drive die on me, and at the same time my 1 TB backup drive! I took the machine back to the guy I bought the PC from, and although the backup drive could not be recovered, he managed to get the machine working again by making a minor change in the BIOS, which got it out of the continuous loop it found itself in after multiple BSOD episodes. Everything seems to be working fine, but yesterday when I tried to save something from Google Chrome I got an insufficient-permissions error, and when I try to install software I get error 2503. I have already followed the suggestions here, but none of them worked for me. Any suggestions would be appreciated. EDIT: This started happening after I tried running a number of tests to get the machine working, including a previous restore point.

    Read the article

  • Nagios service active only when other service is failing

    - by Laimoncijus
    Is it possible to define a service to be active only while another service is failing? Consider the following example: 2 hosts are available, HostA (primary) and HostB (backup), and a Nagios service monitors the number of active connections to each host: it gives OK when the number of connections to the host is > 0 and FAILURE when the number of connections to the host is 0. If I set up this service to monitor both HostA and HostB, it will give me OK for HostA (since it is primary and all connections normally go to it) and FAIL for HostB (since it is the backup and will receive no connections while HostA is alive). Can I make the Nagios service for HostB somehow depend on the service of HostA and give no failures (or maybe be inactive) up to the moment the service of HostA starts failing?
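
    Not from the original question, but a minimal sketch of the built-in service dependency mechanism this describes, assuming both checks are defined as a service named "Connections" on each host. With execution_failure_criteria set to o, checks of the dependent (HostB) service are suppressed while the HostA service is in an OK state:

        # Sketch only: host and service names are assumptions.
        define servicedependency {
            host_name                       HostA
            service_description             Connections
            dependent_host_name             HostB
            dependent_service_description   Connections
            execution_failure_criteria      o    ; skip HostB's check while HostA's service is OK
            notification_failure_criteria   o    ; likewise suppress notifications for HostB
        }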

    Read the article

  • How to efficiently restore Library folder partially deleted on OS X

    - by flow
    I am using OS X Lion and accidentally ran the following from my home directory:

        rm -fr Library

    I realized this some 15 seconds later and did killall rm. Some folders inside "Library" have, of course, been deleted. Now the system seems to be OK, but I fear what will happen in case of a reboot. I have a Time Machine backup from 5 days ago. I wonder if it would be a good solution just to copy the whole "Library" folder of my home directory from the backup and replace the current one. Or what do you think would be the best approach? PS: In order to restore just the deleted directories inside "Library": in which order does rm delete directories, alphabetically?
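
    A sketch of one possible approach (not from the original post; the backup volume and machine names are assumptions): copy the Library folder from the latest Time Machine snapshot into a staging directory first, compare it with the damaged one, and only then replace or merge.

        # Sketch only: "/Volumes/Time Machine Backups" and "MyMac" are placeholder names.
        BACKUP="/Volumes/Time Machine Backups/Backups.backupdb/MyMac/Latest/Macintosh HD/Users/$USER/Library"

        # Copy the backed-up Library to a staging area, preserving metadata and resource forks.
        ditto "$BACKUP" "$HOME/Library.from-backup"

        # See which top-level entries exist in the backup but are missing from the
        # current, partially deleted Library before deciding what to copy back.
        diff <(ls "$HOME/Library.from-backup") <(ls "$HOME/Library")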

    Read the article

  • How can I boot a vm on Hyper-V 2012 when it has a virtual hard-drive missing?

    - by Zone12
    We have a Hyper-V 2012 server with 8 VMs on it. We have attached extra virtual hard drives to each of the VMs to store backups on; these drives are stored on a NAS. After a power failure, we tried to boot the VMs and found that they couldn't be booted without the attached backup drives. We couldn't boot the NAS at that point, so we had to remove all the extra drives manually, boot the VMs, and re-attach the drives at a later date when we got the NAS back up and running. These backup drives are non-essential to the running of the system. I would like to know if there is a way to boot a VM on Hyper-V 2012 with some of its (SCSI) hard drives missing, so that we can recover automatically from a power failure.

    Read the article

  • Reading S.M.A.R.T. statistics on an ESXi?

    - by leeand00
    Is it possible to read S.M.A.R.T. statistics for a hard drive in VMware ESXi? When I did a backup last night I received an error message that didn't really indicate whether the error came from the local drive I was backing up to or from the remote ESXi virtual machine's E:\ drive I was backing up from. When this happened I ran chkdsk on the local drive and on the remote virtual drive. It seems to have worked, and I'm no longer getting the error, but if it is something serious I'd like to know about it before the drive fails. I've already hooked up the backup drive to my system so I can read its S.M.A.R.T. statistics, but I have no idea how one could do this on ESXi.
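
    Not from the original post, but for reference, a sketch of how this can be read from the ESXi shell on ESXi 5.1 and later (earlier releases may not expose this namespace); the device identifier below is a placeholder:

        # List the storage devices to find the identifier of the physical disk (an naa.* or t10.* name).
        esxcli storage core device list | less

        # Read the SMART attributes for that disk; the identifier here is a placeholder.
        esxcli storage core device smart get -d naa.600508b1001c0123456789abcdef01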

    Read the article

  • RAID strategy - 8 1TB drives

    - by alex
    I'm setting up a backup storage device. This machine has Windows Server 2008 on a separate boot drive. It has 8x 1 TB drives and uses a hardware RAID card. My question is: which RAID configuration should I go for? Initially I was going to go with RAID 5 across all 8 drives; however, members on Server Fault have advised against it, and I was wondering why. Some people have suggested two RAID 5 sets of 4 drives each, then striping them. I want to maximise the storage space, as this is a backup unit; it will store SQL backups, Acronis images, files, etc. It won't be for public access, so I wouldn't think the I/O will be that high.

    Read the article

  • rsync stuck with the --checksum option

    - by billc.cn
    I use back-in-time to back up my Linux installation. It serves as an advanced wrapper for the rsync command. Today I tried to add /var/log to the list of folders to be backed up and it caused some serious performance problems: the job seems to get stuck on a particular file and the CPU usage of the rsync parent process reaches 100%. I then used lsof to see which file caused the problem and it seems to be the /var/log directory. I did some googling and some experiments with the different rsync options and found --checksum to be the offender. Without the parameter, an incremental backup finishes properly in minutes. With it, the process gets stuck when rsync tries to sync a constantly changing log file. This kind of makes sense, but it still seems like a bug to me. Am I using the option correctly? Is there a workaround for this?
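
    One possible workaround (a sketch, not from the original post; the destination path is an assumption) is to keep --checksum for the stable data but handle the constantly changing logs in a separate pass that relies on rsync's default size-and-mtime comparison:

        # Sketch only: /mnt/backup is a placeholder destination.
        # Checksum-based pass for the stable data, skipping the volatile logs.
        rsync -a --checksum --exclude='log/' /var/ /mnt/backup/var/

        # Separate pass for /var/log without --checksum, so rsync does not keep
        # re-reading files that grow while they are being checksummed.
        rsync -a /var/log/ /mnt/backup/var/log/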

    Read the article

  • How can we recover/restore lost/overwritten data in our MSSQL 2008 table?

    - by TeTe
    I am in serious trouble and I am seeking professional advice here. We are using MSSQL Server 2008. We removed a primary key and replaced existing data with new data, which resulted in losing our critical business data in its child tables on the MSSQL server. It was entirely a human mistake and we did not have a disk failure.

        1) The last backup file is from a month ago, which means it is useless.
        2) We created Maintenance Plans to back up our database at 12 AM every day, but those files are nowhere to be found.
        3) A friend of mine said we can recover from transaction logs, but when I go to Tasks > Restore, the Transaction Log option is dimmed/disabled.
        4) I checked Management > Maintenance Plans; I can't find any restore point there. It seems that our maintenance plan hasn't been working.

    Is there any third-party tool to recover lost/overwritten data from an MSSQL table? Thanks a lot.

    Read the article

  • Migrating from Desktop PC to real Server

    - by tevlon84
    I am a student working as a part-time administrator at a startup. I have never used a real server (only a desktop PC with Apache). The company I am working for is growing and they want to switch to a real server. My idea would be to use Ubuntu's built-in backup function and use that backup file as the base for the rack server, but I don't know which problems I would run into. Is it a good idea? So basically my question is: What is the easiest way to migrate from a desktop PC to a real rack server (on Ubuntu Server)?
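
    Not part of the original question, but one common alternative to restoring a whole-system backup onto different hardware is to install Ubuntu Server fresh on the new machine and copy only the service data and configuration across. A rough sketch, where the hostname and paths are assumptions and Apache is taken from the question:

        # Sketch only: "newserver" and the paths are placeholders.
        # After installing Ubuntu Server and the apache2 package on the new machine,
        # copy the web content and the site configuration over SSH.
        rsync -a /var/www/ user@newserver:/var/www/
        rsync -a /etc/apache2/sites-available/ user@newserver:/etc/apache2/sites-available/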

    Read the article

  • Use backups if unavailable (not just down)

    - by PriceChild
    Using haproxy, I want:

        - a pool of 'main' servers and 'backup' servers, though they don't necessarily have to be in separate pools;
        - each backend to have a low 'maxconn' (in this case 1);
        - clients not to wait in a queue: if there are no immediately available servers in the 'main' pool, they should be shunted to the 'backup' pool without delay.

    Right now I have one backend, the 'main' servers have an absurdly high weighting, and it 'works'. acl use_backend + connslots is along the right lines, but without the patch in my own answer it isn't perfect. Bonus points for not requiring a modified haproxy binary.

    Read the article

  • How to write more than one line in a launcher

    - by seraex
    How can I run three commands in a launcher? My commands are:

        cd /home/seraex/MyDoc
        rm MyDoc.tgz
        tar cfz MyDoc.tgz *

    which will go to my documents folder, delete the old backup, and make a new backup. At the moment I make a text file, then make a launcher and point it at the file, but I want to delete the file and make the launcher run the commands directly. I'm using Ubuntu 10.10. The Ubuntu site says "Unfortunately launchers do not have access to the Bash environment so you cannot just include the multi commands" when I google chaining in launchers. Thanks; the admin may delete the question.
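
    Not from the original post, but a sketch of the usual workaround: since a launcher runs a single command line rather than a shell script, the three steps can be chained inside one sh -c invocation (the path is taken from the question above):

        # Sketch: a single command line suitable for the launcher's command field.
        # -f keeps rm from aborting the chain if the old archive does not exist yet.
        sh -c 'cd /home/seraex/MyDoc && rm -f MyDoc.tgz && tar cfz MyDoc.tgz *'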

    Read the article

  • What is the best plan to handle server fault for google app engine [closed]

    - by lucemia
    I have used Google App Engine without preparing much of a backup plan before, but that does not look like a good idea any more. Since it is quite hard to find a backup replacement for Google App Engine, I plan to just add a "server error" page which will be shown during an outage. Currently I am thinking to:

        1. Use the CDN CloudFlare in front of Google App Engine; it will also handle the name servers for me.
        2. Prepare a static version of the pages (such as "Oops! server fault") on another hosting platform.
        3. When Google App Engine fails, switch the destination from Google App Engine to the static page by changing the CNAME records on CloudFlare.

    Is there any other recommended way to handle this situation?

    Read the article
