Search Results

Search found 9847 results on 394 pages for 'cloud backup'.

Page 18/394 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Time Machine for Windows

    - by Kevin L.
    A simple Google search for "Time Machine for Windows" results in a flurry of different little apps. But instead of relying on forum anecdotes and advertisements, I call on the much wiser Super User beta community for some depth on this one. Having Time Machine running on Leopard is like a warm, fuzzy blanket of comfort that I never got with RAID, rsync, or SyncToy on Windows. I'm not asking the community what the "best" backup software for Windows is, but instead: Is there any true Time Machine clone for Windows, one that includes as many of the following as possible: Completely transparent, "set-it-and-forget-it" backup Incremental backups (changes only) for every hour for a day, every day for a month, and every week until the backup disk is full Ability to rebuild from this backup disk in case of main drive meltdown (the backup doesn't have to be bootable; neither are Time Machine disks) Extremely easy to use UI (target user == wife). Bonus points for a beautiful UI

    Read the article

  • What happens if an OpenStack cloud controller dies?

    - by magu
    I've been reading up on OpenStack and how we can re-create an EC2/S3-style cloud for our internal development, and I'm having a hard time finding information on how the OpenStack cloud controller provides redundancy of the cloud management services. I know I can set up multiple Swift and Nova nodes, but not a single document/article/howto/wiki contains information on: a) what happens if the cloud controller node dies; and b) how to set up redundant cloud controllers. It seems to me that, although it is massively scalable, there is a big single point of failure built into OpenStack. Can anyone with more experience on OpenStack please shed some light on how it all works with regard to high availability?

    Read the article

  • 4th International SOA Symposium + 3rd International Cloud Symposium by Thomas Erl - call for presentations

    - by Jürgen Kress
    At the last SOA & Cloud Symposium by Thomas Erl, the SOA Partner Community had a great presence. The next conference takes place in April 2011 in Brazil; make sure you submit your papers. The International SOA and Cloud Symposium brings together lessons learned and emerging topics from SOA and Cloud projects, practitioners and experts. The two-day conference agenda will be organized into the following primary tracks: SOA Architecture & Design SOA & BPM Real World SOA Case Studies SOA & Cloud Security Real World Cloud Computing Case Studies REST & Service-Orientation BPM, BPMN & Service-Orientation Business of SOA SOA & Cloud: Infrastructure & Architecture Business of Cloud Computing Presentation Submissions The SOA and Cloud Symposium 2010 program committees invite submissions on all topics related to SOA and Cloud, including but not limited to those listed in the preceding track descriptions. While contributions from consultants and vendors are appreciated, product demonstrations or vendor showcases will not be accepted. All contributions must be accompanied by a biography that describes the SOA or Cloud Computing related experience of the presenter(s). Presentation proposals should be submitted by filling out the speaker form and sending the completed form to [email protected]. All submissions must be received no later than January 31, 2010. To download the speaker form, please click here. We are especially looking for Oracle SOA Suite and BPM Suite case studies! For additional calls for papers please visit our SOA Community Wiki. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Symposium,Cloud Symposium,Thomas Erl,SOA,SOA Suite,Oracle,Call for papers,OPN,BPM,Jürgen Kress

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost. When thinking about the enterprise cloud, many possible configurations and new opportunities for enterprise environments come to mind. The customer needs driving this new trend are often very different, but they are almost always united by two main objectives: elasticity of the infrastructure, both hardware and software, with investments that follow the progressive needs of the current infrastructure, and a combination of innovation and economy.

    A concrete use case I worked on recently demanded exactly these two basic requirements of economy and innovation. The client needed to manage a variety of data caches that could process complex queries and parallel computations while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would let him handle the load peaks that are likely during certain times of the year. For this reason the customer required a replication site to which part of the requests could be routed during peak periods; the desire, however, was to avoid tying up investments in owned hardware and software architectures, so we were asked to look for a solution based on cloud technologies and architectures already offered by the market.

    Coherence can already address the requirement of a large cache spread across the nodes of a cluster, and it adds search and parallel-computing capabilities that use all of the hardware resources simultaneously. Moreover, thanks to the "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, it also satisfies the need for a resilient infrastructure that can rely on nodes temporarily hosted in a cloud architecture. Different types of configuration can be realized with Coherence "Push Replication": Active-Passive, Hub and Spoke, Active-Active, Multi-Master, and Centralized Replication. Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is allowed to write into the cache, we decided to adopt an Active-Passive (Hub and Spoke) configuration. If the requirement should change over time, it will be particularly easy to switch to an Active-Active configuration.

    Although very simple, the small sample in this post, inspired by that project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart in two different domain contexts, one of them "hosted" at home (on premise) and the other hosted by any cloud provider on the network (or simply on the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, and a simple client can insert data only into Site 1. On both sites a listener will be subscribed that listens for changes to specific objects within the various caches.

    To implement these features you need four simple classes: CachedResponse.java is the POJO that will be inserted into the cache and holds useful information about the hypothetical link navigation; ResponseSimulatorHelper.java is a link simulator whose task is to randomly create CachedResponse objects to be added to the caches; CacheCommands.java is the model of our example, responsible for receiving instructions from the controller and performing the basic operations against the cache, such as insert, delete, update and listen; Shell.java is our controller, which issues the commands to be executed against the caches of the two sites.

    In short, we run the Java class "Shell" and ask it to put 100 objects of type "CachedResponse" into the cache through the Java class "CacheCommands"; the simulator "ResponseSimulatorHelper" randomly creates the new "CachedResponse" instances. Finally, the Shell class listens for events occurring within the cache on Site Cloud while insertions and deletions are performed on Site 1.

    Now we create the configurations for the two respective sites/clusters, Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with read/write behaviour, using the cache store class for "push replication", a functionality offered by the Oracle Coherence Incubator project. For Site Cloud we also define a "distributed" cache, with the TCP proxy feature enabled so that it can receive updates from Site 1. The Coherence cache config XML file for the storage node on Site 1 is site1-prod-cache-config.xml; the one for the storage node on Site Cloud is site2-prod-cache-config.xml. For the two "Shell" clients, which connect respectively to the two clusters, we have provided two simple access configurations: site1-shell-prod-cache-config.xml for the Shell on Site 1 and site2-shell-prod-cache-config.xml for the Shell on Site Cloud. Now we just have to put everything together and run our tests.

To start at least one "storage" node (which holds the data) for the "Cloud Site", we can run the standard class  provided OOTB by Oracle Coherence com.tangosol.net.DefaultCacheServer with the following parameters and values:-Xmx128m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml-Dtangosol.coherence.clusterport=9002-Dtangosol.coherence.site=SiteCloud To start at least one "storage" node (which holds the data) for the "Site 1", we can perform again the standard class provided by Coherence  com.tangosol.net.DefaultCacheServer with the following parameters and values:-Xmx128m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml-Dtangosol.coherence.clusterport=9001-Dtangosol.coherence.site=Site1 Then, we start the first client "Shell" for the "Cloud Site", launching the java class it.javac.Shell  using these parameters and values: -Xmx64m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml-Dtangosol.coherence.clusterport=9002-Dtangosol.coherence.site=SiteCloud Finally, we start the second client "Shell" for the "Site 1", re-launching a new instance of class  it.javac.Shell  using  the following parameters and values: -Xmx64m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml-Dtangosol.coherence.clusterport=9001-Dtangosol.coherence.site=Site1  And now, let’s execute some tests to validate and better understand our configuration. TEST 1The purpose of this test is to load the objects into the "Site 1" cache and seeing how many objects are cached on the "Site Cloud". Within the "Shell" launched with parameters to access the "Site 1", let’s write and run the command: load test/100 Within the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: size passive-cache Expected result If all is OK, the first "Shell" has uploaded 100 objects into a cache named "test"; consequently the "push-replication" functionality has updated the "Site Cloud" by sending the 100 objects to the second cluster where they will have been posted into a respective cache, which we named "passive-cache". TEST 2The purpose of this test is to listen to deleting and adding events happening on the "Site 1" and that are replicated within the cache on "Cloud Site". 
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

    Read the article

  • Why does this rsnapshot exclude not work?

    - by bstpierre
    Rsnapshot passes excludes directly to rsync, but rsync's behavior appears inconsistent. I've simplified my rsnapshot backup test to the following directory tree (this tree will be backed up): gorilla:~# find /tmp/snaptest -exec file {} \; /tmp/snaptest: directory /tmp/snaptest/SKIPTHIS: directory /tmp/snaptest/SKIPTHIS/xyz: directory /tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text /tmp/snaptest/SKIPTHIS/bar: ASCII text /tmp/snaptest/SKIPTHIS/foo: ASCII text /tmp/snaptest/SKIPTHIS.txt: ASCII text My config file: config_version 1.2 snapshot_root /tmp/backup-media no_create_root 1 cmd_cp /bin/cp cmd_rm /bin/rm cmd_rsync /usr/bin/rsync cmd_ssh /usr/bin/ssh cmd_logger /usr/bin/logger cmd_du /usr/bin/du interval hourly 6 interval daily 7 interval weekly 4 interval monthly 3 verbose 3 loglevel 3 logfile /media/maxtor-one-touch/rsnapshot.log lockfile /media/maxtor-one-touch/backups/.rsnapshot.pid rsync_short_args -a rsync_long_args --delete --numeric-ids --relative --delete-excluded exclude "SKIPTHIS/**" link_dest 1 backup /tmp/snaptest snaptest The result: gorilla:~# rsnapshot -c /tmp/snaptest.conf hourly echo 12638 > /media/maxtor-one-touch/backups/.rsnapshot.pid mkdir -m 0755 -p /tmp/backup-media/hourly.0/ /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \ --exclude="SKIPTHIS/**" /tmp/snaptest \ /tmp/backup-media/hourly.0/snaptest touch /tmp/backup-media/hourly.0/ rm -f /media/maxtor-one-touch/backups/.rsnapshot.pid gorilla:~# find /tmp/backup-media/ -exec file {} \; /tmp/backup-media/: directory /tmp/backup-media/hourly.0: directory /tmp/backup-media/hourly.0/snaptest: directory /tmp/backup-media/hourly.0/snaptest/tmp: sticky directory /tmp/backup-media/hourly.0/snaptest/tmp/snaptest: directory /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS: directory /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz: directory /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/bar: ASCII text /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/foo: ASCII text /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS.txt: ASCII text My confusion stems from the fact that if I copy-paste the rsync command echoed by rsnapshot, the SKIPTHIS directory is excluded! (I've tested with various other SKIPTHIS patterns with the same results.) Any idea what's going on?
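
    Not an answer from the thread, but a quick way to narrow a problem like this down is to dry-run rsync by hand and change one variable at a time. A minimal sketch, using the same paths as the question (the note about quoting is a common rsnapshot gotcha, offered as something to check rather than as the confirmed cause):

      #!/bin/bash
      # -n = dry run: list what rsync WOULD transfer, write nothing.
      # 1) The pattern as it appears in the echoed command:
      rsync -avn --relative --exclude='SKIPTHIS/**' /tmp/snaptest /tmp/probe

      # 2) The same pattern with literal double quotes around it. rsnapshot execs
      #    rsync without a shell, so quotes written in rsnapshot.conf can end up
      #    inside the pattern itself and match nothing:
      rsync -avn --relative --exclude='"SKIPTHIS/**"' /tmp/snaptest /tmp/probe

      # 3) A plain directory exclude for comparison:
      rsync -avn --relative --exclude='SKIPTHIS/' /tmp/snaptest /tmp/probe

    If (1) excludes the directory while (2) copies it, the quotes on the exclude line of rsnapshot.conf are the likely culprit.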

    Read the article

  • SBS 2008 SP2 Backup - Volume Shadow Copy Operation Failed

    - by Robert Ortisi
    Server Setup Exchange 2007 Version: 08.03.0192.001 (Rollup 4) Windows Small Business Server 2008 SP2 (Rollup 5) Exchange set up on D: drive (449 GB / 698 GB Free) 80 GB / 148 GB Free on OS drive. Issue Backup Failure (VSS related) Backup Software Windows Server Backup (ver 1.0) Simplified Error Creation of the shared protection point timed out. Unknown error (0x81000101) The flush and hold writes operation on volume C: timed out while waiting for a release writes command. Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush and hold the volume \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}. This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase, and it can cause the shadow-copy creation to fail. Trying again when disk activity is lower may solve this problem. What I've tried Server Reboot. Updated Server and Exchange. ReConfigured Sharepoint (Helped resolve last vss error I encountered). registered VSS Dll's (Backups will sometimes work afterwards but VSS writers fail soon after). Tried Implementing Hotfix: http://support.microsoft.com/kb/956136 Tried Implementing Hotfix: http://support.microsoft.com/kb/972135 I left it for a few days and a few backups came through but then began to fail again. Detailed Information Log Name: Application Source: VSS Date: 16/11/2011 8:02:11 PM Event ID: 12341 Task Category: None Level: Warning Keywords: Classic User: N/A Computer: SERVER.DOMAIN.local Description: Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush and hold the volume \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}. This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase, and it can cause the shadow-copy creation to fail. Trying again when disk activity is lower may solve this problem. Operation: Executing Asynchronous Operation Context: Current State: flush-and-hold writes Volume Name: \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\ Event Xml: 12341 3 0 0x80000000000000 1651049 Application SERVER.DOMAIN.local 43 \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\ Operation: Executing Asynchronous Operation Context: Current State: flush-and-hold writes Volume Name: \?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\ ================================================================================= Log Name: System Source: volsnap Date: 16/11/2011 8:02:11 PM Event ID: 8 Task Category: None Level: Error Keywords: Classic User: N/A Computer: SERVER.DOMAIN.local Description: The flush and hold writes operation on volume C: timed out while waiting for a release writes command. Event Xml: 8 2 0 0x80000000000000 987135 System SERVER.DOMAIN.local ================================================================================== Log Name: Application Source: Microsoft-Windows-Backup Date: 16/11/2011 8:11:18 PM Event ID: 521 Task Category: None Level: Error Keywords: User: SYSTEM Computer: SERVER.DOMAIN.local Description: Backup started at '16/11/2011 9:00:35 AM' failed as Volume Shadow copy operation failed for backup volumes with following error code '2155348001'. Please rerun backup once issue is resolved. 
Event Xml: 521 0 2 0 0 0x8000000000000000 1651065 Application SERVER.DOMAIN.local 2011-11-16T09:00:35.446Z 2155348001 %%2155348001 ================================================================================== Writer name: 'FRS Writer' Writer Id: {d76f5a28-3092-4589-ba48-2958fb88ce29} Writer Instance Id: {ba047fc6-9ce8-44ba-b59f-f2f8c07708aa} State: [5] Waiting for completion Last error: No error Writer name: 'ASR Writer' Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4} Writer Instance Id: {0aace3e2-c840-4572-bf49-7fcc3fbcf56d} State: [1] Stable Last error: No error Writer name: 'Shadow Copy Optimization Writer' Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f} Writer Instance Id: {054593e2-2086-4480-92e5-30386509ed1b} State: [1] Stable Last error: No error Writer name: 'Registry Writer' Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485} Writer Instance Id: {840e6f5f-f35a-4b65-bb20-060cf2ee892a} State: [1] Stable Last error: No error Writer name: 'COM+ REGDB Writer' Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f} Writer Instance Id: {9486bedc-f6e8-424b-b563-8b849d51b1e1} State: [1] Stable Last error: No error Writer name: 'BITS Writer' Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0} Writer Instance Id: {29368bb3-e04b-4404-8fc9-e62dae18da91} State: [1] Stable Last error: No error Writer name: 'Dhcp Jet Writer' Writer Id: {be9ac81e-3619-421f-920f-4c6fea9e93ad} Writer Instance Id: {cfb58c78-9609-4133-8fc8-f66b0d25e12d} State: [5] Waiting for completion Last error: No error ==================================================================================

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions Amazon EC2 Backup Strategy Amazon EC2 + EBS:: Regular backup plan? Simple Backup Strategy for Amazon EC2 instances / volumes? And this suggestion http://alestic.com/2009/09/ec2-consistent-snapshot I tried using command line + crontab (the command line works, but crontab for some reason, doesn't) But I'm still pretty lost, all I want is an automated, rolling backup of my amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks back, but delete old snapshots as new ones come for cost control) And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple, is there a way using a tool like Puppet to do it without a painful learning curve? (or via other tools like http://ylastic.com) If yes, how?
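
    For what it's worth, the rolling-snapshot part can be done with a short cron script and no configuration management at all; the sketch below uses the current AWS CLI, and the volume ID and 28-day retention are placeholders (it is an illustration, not a recommendation over Puppet or ylastic):

      #!/bin/bash
      # Snapshot one EBS volume, then prune that volume's snapshots older than the cutoff.
      set -euo pipefail
      VOLUME_ID="vol-0123456789abcdef0"     # placeholder
      RETENTION_DAYS=28

      aws ec2 create-snapshot --volume-id "$VOLUME_ID" \
          --description "rolling backup $(date +%F)"

      CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%F)
      aws ec2 describe-snapshots --owner-ids self \
          --filters "Name=volume-id,Values=$VOLUME_ID" \
          --query 'Snapshots[].[SnapshotId,StartTime]' --output text |
      while read -r snap_id start_time; do
          # StartTime looks like 2012-06-01T05:00:00.000Z; compare the date part.
          if [[ "${start_time%%T*}" < "$CUTOFF" ]]; then
              aws ec2 delete-snapshot --snapshot-id "$snap_id"
          fi
      done

    For consistent snapshots of a busy volume, the ec2-consistent-snapshot approach linked in the question (freeze the filesystem or lock MySQL first) still applies.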

    Read the article

  • Free solution to backup folders to external SFTP server with shadow copy

    - by Sergiy Byelozyorov
    I have a university account on a Linux machine with 10TB of free space, accessible via SFTP. I would like to back up my Windows 7 x64 laptop to the university. Currently I am using rsync+cygwin, but the backup is pretty slow (without shadow copy) and I hate the console window that appears on my screen every day when I log in. So I am looking for something like Windows Backup but with support for SFTP. A combination of tools will work too.

    Read the article

  • Backup IMAP mail, then access via IMAP again

    - by pauldoo
    I am looking for a tool to back up an entire IMAP account and then expose that backup (read-only) via IMAP again. This would be perfect for backing up email from any provider, and it would allow the backup to be accessed from any mail client even years after closing the account. I suspect this could be achieved using a full-blown IMAP server by configuring it to mirror some other server, but I am hoping for a simpler solution.
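
    One hedged way to get a backup that is itself an IMAP account, short of hand-configuring a mirroring IMAP server, is to copy the mailbox into a local IMAP server (Dovecot, for example) with imapsync on a schedule. Host names, users and password files below are placeholders:

      #!/bin/bash
      # Mirror a remote IMAP account into a local IMAP account; re-runs only
      # transfer messages that are new or changed since the last sync.
      imapsync \
        --host1 imap.provider.example --user1 me@provider.example \
        --passfile1 /root/.provider-pass \
        --host2 localhost --user2 mailbackup \
        --passfile2 /root/.local-pass

    Any mail client can then browse the local account over IMAP even years after the original account is closed; making it strictly read-only is a matter of Dovecot ACLs.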

    Read the article

  • Backup software for Mac OS X

    - by Simone Carletti
    Which backup software do you recommend for Mac OS X? As you probably know, Leopard comes with an integrated backup tool called Time Machine. It works pretty well, although it lacks some advanced restore/search features. Here's a list of backup software for Mac OS X: Time Machine (integrated) Carbon Copy Cloner (free) SuperDuper (commercial) iBackup (free) Do you know of more? What software do you use, and which feature can't you live without?

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
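
    Not from the thread, but the usual way to get rid of the 12-hour cp -al pass is to let rsync create the hard links itself with --link-dest, so each snapshot is built in a single pass instead of "copy the whole link farm, then rsync over it". A minimal sketch with placeholder paths:

      #!/bin/bash
      # One-pass incremental snapshot: unchanged files become hard links to the
      # previous snapshot, changed files are transferred normally.
      set -e
      SRC="server:/export/data/"       # placeholder source
      DEST="/backup/volume1/data"      # placeholder snapshot root
      TODAY=$(date +%F)

      PREV=$(ls -1d "$DEST"/20* 2>/dev/null | tail -n 1)   # newest existing snapshot, if any

      rsync -a --delete \
          ${PREV:+--link-dest="$PREV"} \
          "$SRC" "$DEST/.incoming-$TODAY"

      mv "$DEST/.incoming-$TODAY" "$DEST/$TODAY"

    This removes the separate metadata-heavy local cp pass entirely.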

    Read the article

  • Backup to Synology NAS using rsync or NFS and hardlinks

    - by danilo
    I want to back up data from a Windows (Vista) computer to a Synology NAS (210j). The NAS supports FTP, SMB, NFS and also allows an rsync daemon to be set up. I want to back up different folders to the NAS, but I'd prefer to use the hard-link method to save disk space (like this script does). With this method, a new folder is created for every backup, but if a file already exists on the target, only a hard link is created. The filesystem on the Synology device is ext3, so I probably can't use rsyncbackup, as it is made for NTFS. Is there another way to do this backup with hard-link support?
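
    For reference, rsync itself can build those hard-link snapshots on the Synology side via --link-dest, so nothing NTFS-specific is needed on the ext3 volume. A hedged sketch of the push from the Windows machine using cwRsync/cygwin against the NAS rsync daemon (the module name, paths and the assumption that a snapshot exists from the previous day are all placeholders):

      #!/bin/bash
      # Push today's snapshot; files unchanged since yesterday's snapshot are
      # stored as hard links on the NAS instead of being copied again.
      TODAY=$(date +%F)
      YESTERDAY=$(date -d yesterday +%F)

      rsync -a --delete \
          --link-dest="../$YESTERDAY" \
          "/cygdrive/c/Users/me/Documents/" \
          "backupuser@synology-nas::backups/$TODAY/"

    If yesterday's folder does not exist, rsync just warns and performs a full copy, so the first run works too.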

    Read the article

  • How to backup a networked drive?

    - by nute
    I have a networked drive (Iomega Media Drive). To be safe in case the drive crashes, I've decided to buy an additional networked drive (WD MyBook World). Now, how do I back up one onto the other continuously? The WD drive came with backup software (a trial version; they didn't say that when I bought it), however it doesn't allow me to select a networked drive, only local drives. How do I back up a NETWORKED DRIVE ONTO A NETWORKED DRIVE? Thanks
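
    One hedged way to do this when the bundled software only sees local drives is to let a third, always-on machine on the network (any Linux box) mount both shares and rsync one onto the other on a schedule. Share names and credentials below are placeholders:

      #!/bin/bash
      # Mirror the Iomega share onto a folder of the WD MyBook share.
      set -e
      mkdir -p /mnt/iomega /mnt/mybook
      mount -t cifs //iomega-nas/media  /mnt/iomega -o username=backup,password=secret,ro
      mount -t cifs //mybook-nas/backup /mnt/mybook -o username=backup,password=secret

      rsync -av --delete /mnt/iomega/ /mnt/mybook/iomega-mirror/

      umount /mnt/iomega /mnt/mybook

    "Continuously" then just means running this from cron as often as the amount of changed data allows.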

    Read the article

  • Windows Backup (2008 R2) recovery and timezone

    - by GrZeCh
    Hello, does a difference between the time zone on the Windows Server 2008 machine where the backup was made and the time zone in the recovery console make a difference? The recovery console (and wbadmin from the command line too) is not finding any backup on the local hard drive connected to the server. Thanks EDIT: I'm working on Windows Server 2008 R2 EDIT2: This is not related to the time zone. When I connected a backup hard drive written by Windows 2008 R2 Release Candidate, the recovery console run from the RTM version DVD found the backups stored on it without problems.

    Read the article

  • Virtualizor + VPS Backup (Bare Metal Restore capable) Using rSync 3

    - by Gaia
    I am using Virtualizor to manage 3 Xen VPSes. The hardware node and each VPS run CentOS 5.x. My backup needs are as follows: 1) I need to be able to bare-metal restore the entire hardware node, excluding the VPSes (which would be restored via #2 below). 2) I need to have a complete backup of each VPS, ideally a backup that can be deployed on any other host that uses Xen, if the need arises. Naturally, I would also need to use this backup to restore an entire VPS to an earlier state within the same host. Which folders does rsync need to keep backed up in order to accomplish the above? The rsync specialists aren't sure of it either. Thanks
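
    Which folders matter for Virtualizor's own metadata is really a question for its documentation; the sketch below only illustrates one common approach for the VPS part, assuming the guests sit on LVM volumes (the volume group, volume name and destination are placeholders):

      #!/bin/bash
      # Image one LVM-backed Xen guest through a short-lived snapshot, so the copy
      # is crash-consistent while the VPS keeps running.
      set -e
      VG=vg0                    # placeholder volume group
      LV=vps101-disk            # placeholder guest volume
      DEST=/backup/images

      lvcreate --snapshot --size 2G --name "${LV}-snap" "/dev/$VG/$LV"
      dd if="/dev/$VG/${LV}-snap" bs=4M | gzip > "$DEST/${LV}-$(date +%F).img.gz"
      lvremove -f "/dev/$VG/${LV}-snap"

    Restoring on another Xen host is then a gunzip | dd onto a fresh volume plus the guest's Xen configuration file. The snapshot step is there because rsync of a running guest's block device is not crash-consistent.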

    Read the article

  • Configuring Backup Exec 2012 using USB hard drives as media

    - by SydxPages
    I have found some information on this but have not found exact answers to these questions. Background: I have installed Backup Exec 2012 (with agents for databases). I have configured a storage pool with 2 USB drives (1TB). The backups are configured to back up to one of the 2 drives (depending on which one is connected). I have 2 questions: How do I get Backup Exec to tell me which disk to insert? I have used tapes before and it told me which tape to use; I was hoping this was available for disks too. (Whilst there are only 2 at the moment, there will be more.) And how do I get Backup Exec to delete old backups when the disk is full?

    Read the article

  • SBS 2011 backup

    - by Chris
    I have a freshly installed SBS 2011 server that I need to configure for backup. I tried using the SBS backup configuration tool, but it didn't want to use anything but an external drive. Previously, with our W2K3 servers, I used NTBackup to back up the server to disk and then copied the backup files to a remote server on a regular basis. It doesn't appear this is possible with the built-in backup tools in SBS 2011. Am I missing something? What other options are there that won't cost an arm and a leg?

    Read the article

  • Debian server doesn't free memory after backup

    - by stan31337
    I have a production server running Debian 6.0.6 Squeeze #uname -a Linux debsrv 2.6.32-5-xen-amd64 #1 SMP Sun Sep 23 13:49:30 UTC 2012 x86_64 GNU/Linux Every day cron executes a backup script as root: #crontab -e 0 5 * * * /root/sites_backup.sh > /dev/null 2>&1 #nano /root/sites_backup.sh #!/bin/bash str=`date +%Y-%m-%d-%H-%M-%S` tar pzcf /home/backups/sites/mysite-$str.tar.gz /var/sites/mysite/public_html/www mysqldump -u mysite -pmypass mysite | gzip -9 > /home/backups/sites/mysite-$str.sql.gz cd /home/backups/sites/ sha512sum mysite-$str* > /home/backups/sites/mysite-$str.tar.gz.DIGESTS cd ~ Everything works perfectly, but I notice that Munin's memory graph shows an increase in cache and buffers after the backup. Then I just download the backup files and delete them. After deletion, Munin's memory graph returns cache and buffers to the state they were in before the backup. Here's the Munin graph: unfortunately I don't have enough rep to add an image here, so here's a link:
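
    What the graph is showing is almost certainly just the Linux page cache: tar, gzip and mysqldump read and write a lot of data, the kernel keeps those pages cached, and that memory is handed back the moment something else needs it. A quick way to confirm this on the box (the commands are standard; the drop_caches step is safe but briefly slows the server while caches refill, and needs root):

      #!/bin/bash
      # Compare memory before and after dropping the (reclaimable) caches.
      free -m                              # note the buffers/cached figures after a backup run

      sync                                 # flush dirty pages first
      echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
      free -m                              # cached/buffers fall back to pre-backup levels

    If the numbers drop back, nothing is leaking and no action is needed; deleting the backup files "frees" the memory only because their cached pages get invalidated.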

    Read the article

  • IMAP mailboxes Backup script...

    - by Paul Stevens
    Hi guys, I need some advice on creating a script that runs at night on my IMAP mail server and backs up all the mailboxes (50); once the backup is done, it should burn the resulting file to a DVD. What would be the best procedure to accomplish an IMAP backup, and what tools should I use? Thanks
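
    A minimal sketch of the nightly job, assuming the server stores mail in Maildir format under /var/vmail and has a DVD writer at /dev/dvd (both are assumptions; adjust for mbox or a different layout):

      #!/bin/bash
      # Archive all mailboxes, then burn the archive to a data DVD.
      set -e
      DATE=$(date +%F)
      ARCHIVE="/var/backups/imap-$DATE.tar.gz"

      tar czf "$ARCHIVE" -C /var/vmail .       # all 50 mailboxes in one archive

      # -Z starts a new session; -R -J add Rock Ridge and Joliet file names.
      growisofs -Z /dev/dvd -R -J "$ARCHIVE"

    If the mailboxes together outgrow a single DVD, split the archive with the split utility or burn per-mailbox archives instead.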

    Read the article

  • How can I get Transaction Logs to auto-truncate after Backup

    - by Yaakov Ellis
    Setup: Sql Server 2008 R2, databases set up with Full recovery mode. I have set up a maintenance plan that backs up the transaction logs for a number of databases on the server. It is set to create backup files in sub-directories for each database, verify backup integrity is turned on, and backup compression is used. The job is set to run once every 2 hours during business hours (8am-6pm). I have tested the job and it runs fine, creates the log backup files as it should. However, from what I have read, once the transaction log is backed up, it should be ok to truncate the transaction log. I do not see any option for doing this in the Sql Server Maintenance Plan designer. How can I set this up?

    Read the article

  • Backup and restore Subversion user permissions

    - by Earth Engine
    We use svnsync to create fully functional backup servers, and we have a script to do so. However, if we wanted to create a new backup server, we would have to copy the htpasswd and groups.conf files across (that is not hard) and, after running svnsync, manually assign the users/groups to the repositories. Also, if we change the assignments on the main server, there is no easy way to apply that change to all backup servers. Since we have 50+ projects and 30+ users, this is a boring and error-prone exercise. Are there any tools that can help us back up and restore these assignments automatically? We are using VisualSVN under Windows, so it is better to have solutions as Windows scripts, not shell scripts.

    Read the article

  • How best to backup 6x Win2k3 Servers

    - by saille
    We have an external HP LTO3 tape drive. It needs to back up 6 Windows 2003 machines every night. The servers are HP DL380 G3 and the tape drive is attached locally to one of them via SCSI. On a budget of $0, and with a goal of keeping it simple, what is going to be the best way to back up these machines? What software should we use? NTBackup? Or does HP have something better for free? We don't need image backups - file system + system state will be adequate. Do we need to copy the files to be backed up onto the machine with the tape drive attached? Edit: Let me ask a more focused question: would you use NTBackup or something else? No soap-boxing please, we're after some quick advice from someone who's used a similar setup.

    Read the article
