Search Results

Search found 5747 results on 230 pages for 'backup'.

Page 68/230 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances: InstanceA (production server) and InstanceB (test server). My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the instance schema relationship looks like this: InstanceA - Schema Version 1.5; InstanceB - Schema Version 1.6 (new version being tested). An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To fulfill this, I am taking the database backups of InstanceA and applying them (restoring them) to InstanceB. My question is, how does schema version affect the restore process? I know I can do this: back up InstanceA (Schema Version 1.5), restore to InstanceB (Schema Version 1.5). But can I do this: back up InstanceA (Schema Version 1.5), restore to InstanceB (Schema Version 1.6, the new version being tested)? If no, what would the failure look like? If yes, would the type of schema change matter? For example, if Schema Version 1.6 differed from Schema Version 1.5 by just having an altered stored proc, I imagine that this type of schema change shouldn't affect the restore process. On the other hand, if Schema Version 1.6 differed from Schema Version 1.5 by having a different table definition (say, an additional column), I imagine this would affect the restore process. I hope I've made this clear enough. Thanks in advance for any input!
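
    Assuming this is SQL Server (the RESTORE/instance terminology suggests it): a full RESTORE replaces the entire target database, schema included, so restoring the 1.5 backup over InstanceB puts InstanceB back at schema 1.5 regardless of the 1.6 changes; the restore itself does not fail because of the schema difference, but any 1.6-only changes have to be re-applied afterwards. A minimal pyodbc sketch of that restore-then-check step follows; the driver string, server name, database name, backup path and SchemaVersion table are all hypothetical, not taken from the question.

        # Minimal sketch: restore InstanceA's backup over InstanceB, then check which
        # schema version the restored database now reports (hypothetical names).
        import pyodbc

        # RESTORE cannot run inside a user transaction, hence autocommit=True.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=InstanceB;"
            "DATABASE=master;Trusted_Connection=yes;",
            autocommit=True,
        )
        cur = conn.cursor()
        cur.execute("""
            RESTORE DATABASE MyDb
            FROM DISK = N'\\\\backupshare\\MyDb_InstanceA.bak'
            WITH REPLACE, RECOVERY
        """)

        # After the restore, the database reports whatever version InstanceA had,
        # assuming a SchemaVersion table exists (hypothetical); 1.6-only migration
        # scripts would be re-run from here.
        cur.execute("SELECT MAX(version) FROM MyDb.dbo.SchemaVersion")
        print("Schema version after restore:", cur.fetchone()[0])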

    Read the article

  • Uniquely identify files/folders in NTFS, even after move/rename

    - by Felix Dombek
    I haven't found a backup (synchronization) program which does what I want, so I'm thinking about writing my own. What I have now does the following: it goes through the data in the source and, for every file which has its archive bit set OR does not exist in the destination, copies it to the destination, overwriting a possibly existing file. When done, it checks, for every file in the destination, whether it exists in the source, and if it doesn't, deletes it. The problem is that if I move or rename a large folder, it first gets copied to the destination even though it is in principle already there, just under a different path. Then the folder which was already there is deleted afterwards. Apart from the unnecessary copying, I frequently run into space problems because my backup drive isn't large enough to hold the original data twice. Is there a way to programmatically identify such moved/renamed files or folders, i.e. by NTFS ID or physical location on media or something else? Are there solutions to this problem? I do not care about the programming language, but hints for doing this with Python, C++, C#, Java or Prolog are appreciated.
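
    A minimal sketch of the NTFS-ID idea in Python: on Windows (Python 3.5+), os.stat() exposes the NTFS file reference number as st_ino and the volume serial number as st_dev, and the pair identifies a file on that volume even after it is moved or renamed (IDs can be reused after deletion, so in practice you would pair them with size/mtime as a sanity check). The D:\data path is an assumption; the "previous" index would normally be loaded from the state saved after the last backup run.

        import os

        def index_volume(root):
            """Map (volume serial, NTFS file ID) -> path relative to root."""
            ids = {}
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    full = os.path.join(dirpath, name)
                    st = os.stat(full)
                    ids[(st.st_dev, st.st_ino)] = os.path.relpath(full, root)
            return ids

        def detect_moves(previous, current):
            """Files whose ID is unchanged but whose path differs were moved or renamed."""
            moves = []
            for key, new_path in current.items():
                old_path = previous.get(key)
                if old_path is not None and old_path != new_path:
                    moves.append((old_path, new_path))
            return moves

        if __name__ == "__main__":
            # 'previous' would normally be loaded from the index saved after the
            # last run; both paths here are hypothetical.
            previous = index_volume(r"D:\data")
            current = index_volume(r"D:\data")
            for old_path, new_path in detect_moves(previous, current):
                print(f"rename detected: {old_path} -> {new_path}")

    A backup tool can then treat such entries as "rename in destination" instead of "copy then delete", which avoids ever holding the data twice.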

    Read the article

  • Error during Time Machine backups on OS X Lion

    - by user92401
    After I turn on my machine, the first couple of Time Machine backups seem to go OK, but after about an hour I get this error: "Unable to complete backup. An error occurred while creating the backup folder. Latest successful backup: 7/31/11 at 12:32 PM" I'm running 10.7. Time Machine is backing up an internal HD to an external USB HD. I've already run Disk Utility to repair the Time Machine partition. It's a relatively new hard drive and didn't have any issues. Here's what I've found in the Console's log filtered for backupd:
      7/31/11 12:31:21.223 PM com.apple.backupd: Starting standard backup
      7/31/11 12:31:21.447 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
      7/31/11 12:31:29.146 PM com.apple.backupd: 983.7 MB required (including padding), 391.90 GB available
      7/31/11 12:32:19.471 PM com.apple.backupd: Copied 3156 files (36.0 MB) from volume Macintosh HD.
      7/31/11 12:32:20.017 PM com.apple.backupd: Copied 3173 files (36.0 MB) from volume LI.
      7/31/11 12:32:20.136 PM com.apple.backupd: 934.8 MB required (including padding), 391.86 GB available
      7/31/11 12:32:54.755 PM com.apple.backupd: Copied 916 files (117.8 MB) from volume Macintosh HD.
      7/31/11 12:32:54.894 PM com.apple.backupd: Copied 933 files (117.8 MB) from volume LI.
      7/31/11 12:32:55.937 PM com.apple.backupd: Starting post-backup thinning
      7/31/11 12:32:55.937 PM com.apple.backupd: No post-back up thinning needed: no expired backups exist
      7/31/11 12:32:55.960 PM com.apple.backupd: Backup completed successfully.
      7/31/11 1:21:28.624 PM com.apple.backupd: Starting standard backup
      7/31/11 1:21:28.631 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
      7/31/11 1:21:28.682 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
      7/31/11 1:21:28.683 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
      7/31/11 1:21:38.694 PM com.apple.backupd: Backup failed with error: 2

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files, because I will reset my Macbook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help. Next, I attempted to back up my Macbook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the trash problem as well, and whether or not these problems are related. EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.
      6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
      6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
      6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
      6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
      6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
      6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
      6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
      6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
      6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
      6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
      6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
      6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
      6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19

    Read the article

  • sybase bcp import fails

    - by chromeplatedbanana
    We're trying to export some tables from our production database to our test database using bcp. The bcp export seems to work fine, but the import always fails with a data type error (see below). We tested this on our test database by exporting the table content to a file and then importing it again immediately, but that failed too. For example,
      bcp TABLENAME out ~/tempfile -S servername -U username
    generates a file as expected. If we use the -c option then the number of lines is as expected. However,
      bcp TABLENAME in ~/tempfile -S servername -U username
    fails with
      CTLIB Message: - L0/0D/S0/N0/0/0: blk_int(): blk_layer: CT library error: Cannot find an equivalent CS_TYPE for this TDS data type 49
      blk_init failed.
    We get this whenever we try to copy into TABLENAME, whether from the production or test table dump file. I don't understand why export and import for the same TABLENAME is generating a data type error. What am I doing wrong here? Thanks
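
    One thing worth trying, sketched below with hypothetical names and credentials: this CS_TYPE/TDS error typically appears when bcp moves data in native format between servers whose datatypes or versions don't line up, and forcing character format with -c on both the export and the import often sidesteps it. The sketch just wraps the same bcp binary the question already uses.

        import subprocess

        def bcp(direction, table, datafile, server, user, password):
            # -c = character format; use it consistently on BOTH the out and in runs.
            cmd = [
                "bcp", table, direction, datafile,
                "-c",
                "-S", server,
                "-U", user,
                "-P", password,
            ]
            subprocess.run(cmd, check=True)

        # Hypothetical databases, servers and credentials:
        bcp("out", "proddb..TABLENAME", "/tmp/tablename.dat", "PRODSERVER", "username", "secret")
        bcp("in",  "testdb..TABLENAME", "/tmp/tablename.dat", "TESTSERVER", "username", "secret")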

    Read the article

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
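
    A minimal sketch of the "easily scriptable" part, with hypothetical names and paths: given the list of user tables (for example produced with isql from "select name from sysobjects where type = 'U'" and saved one name per line in tables.txt), emit a matched pair of bcp out / bcp in commands per table, so the export and the reload into the freshly created database don't have to be written by hand. This only covers the data copy; recreating the schema in the new database is assumed to be handled separately.

        TABLES_FILE = "tables.txt"
        OLD_DB, NEW_DB = "mydb", "mydb_new"
        SERVER, USER = "PRODSERVER", "sa"

        with open(TABLES_FILE) as f:
            tables = [line.strip() for line in f if line.strip()]

        # Write a shell script rather than running bcp directly, so it can be reviewed
        # and re-run; the password comes from the environment, not the command text.
        with open("export_import.sh", "w") as out:
            out.write("#!/bin/sh\nset -e\n")
            for t in tables:
                datafile = f"/dump/{t}.bcp"
                out.write(f'bcp {OLD_DB}..{t} out {datafile} -c -S{SERVER} -U{USER} -P"$SYBPASS"\n')
                out.write(f'bcp {NEW_DB}..{t} in  {datafile} -c -S{SERVER} -U{USER} -P"$SYBPASS"\n')

        print(f"wrote export_import.sh for {len(tables)} tables")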

    Read the article

  • export shared services from MOSS

    - by vittocia
    Hello, using the stsadm command I have been able to export a MOSS website and restore it on a different server which works fine. I tried the same for the shared services, it gave no errors, but it does not have all the import connections when I check around. Is there a better way to export and restore shared services, or a way to synch the import connections and user list?

    Read the article

  • MySQL equivalent to .pgpass, or automatic authentication in a cron job for mySQL

    - by Ibrahim
    I'm writing a bash script to back up my databases. Most are postgresql, and in postgres there's a way to avoid having to authenticate by creating a ~/.pgpass file which contains the postgres password. I put this in root's home directory and made it chmod 0600, so that root could dump the postgres databases without having to authenticate. Now I want to do something similar for mysql, although I only have one mysql database. How can I do this? I don't want to specify the password on the command line for mysqldump because this is part of a script that might be somewhat visible to other users. Is there a better way (i.e. built in to mysql) to do this than make a file that only root can read and then read that to get the mysql password, and then use that in the bash script as a variable?
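
    The closest MySQL equivalent to ~/.pgpass is a client option file. If /root/.my.cnf (chmod 600) contains a [client] section with "user = root" and "password = ...", mysqldump reads the credentials itself and the password never appears on the command line or in ps output; --defaults-extra-file lets you keep the file somewhere non-standard (it must be the first option). A minimal sketch follows; the paths are assumptions.

        import subprocess

        CNF = "/root/.my.cnf"            # 0600, root-only option file with the password
        DUMP = "/var/backups/all-databases.sql"

        with open(DUMP, "w") as out:
            subprocess.run(
                ["mysqldump", f"--defaults-extra-file={CNF}", "--all-databases"],
                stdout=out,
                check=True,
            )

    If the file lives at root's ~/.my.cnf, the --defaults-extra-file argument can be dropped entirely, which also works when the caller is a plain bash cron job rather than a script like this one.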

    Read the article

  • Easy Oracle Log-Shipping

    - by ItsAMystery
    Hi All, I am looking for a decent way of keeping a secondary Oracle database up to date without exporting and importing the database each time. There are 3 users on the instance that I would essentially like to 'log ship', if that's what it is called on Oracle! Can anyone suggest anything? The database is well under a GB total and we are running 10g Express (although I have thought about using 10g Standard as we have a spare license). Cheers, Chris

    Read the article

  • Automate uploading of videos to YouTube

    - by John
    Here's the problem: I would like to keep lots of homemade videos. Of course, they are subject to being lost, or somebody could steal the computer, or water or fire could destroy them. Secondly, I have to plug in my hard drive every time I want to watch something, which I find slow and cumbersome. I was thinking that perhaps I could upload the videos to YouTube with the privacy set to invite only and then delete the videos from the hard drive automatically. Could this be done?
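
    It can be scripted. A hedged sketch using the YouTube Data API v3 via google-api-python-client (an API that postdates this question); client_secrets.json, the video directory and the extensions are assumptions, and you would want to confirm each upload succeeded before deleting anything locally. Note also that YouTube re-encodes video, so this is convenient viewing rather than a bit-for-bit backup.

        import os
        from google_auth_oauthlib.flow import InstalledAppFlow
        from googleapiclient.discovery import build
        from googleapiclient.http import MediaFileUpload

        SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]

        flow = InstalledAppFlow.from_client_secrets_file("client_secrets.json", SCOPES)
        youtube = build("youtube", "v3", credentials=flow.run_local_server(port=0))

        def upload_and_remove(path):
            request = youtube.videos().insert(
                part="snippet,status",
                body={
                    "snippet": {"title": os.path.basename(path), "description": "home video"},
                    "status": {"privacyStatus": "private"},   # or "unlisted"
                },
                media_body=MediaFileUpload(path, resumable=True),
            )
            response = None
            while response is None:                 # resumable upload loop
                status, response = request.next_chunk()
            print("uploaded as", response["id"])
            os.remove(path)                         # only after a successful upload

        for name in os.listdir("/media/videos"):
            if name.lower().endswith((".mp4", ".mov", ".avi")):
                upload_and_remove(os.path.join("/media/videos", name))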

    Read the article

  • Windows Home Server style redundancy/multi-disk-support on Windows Server 2008 R2?

    - by user19597
    I'm setting up a fileserver for our department. It'll be connected to the domain. I want it to have a very large amount of storage (several TB). Ideally, it should also preserve disk space by identifying identical files and only storing them once. It should be fault tolerant, so that if one of the drives fails, that drive can be replaced without losing any data. All of these features are available in Microsoft's consumer offering, Windows Home Server. However, I can't find this kind of feature within the enterprise Windows Server 2008 R2. Am I missing something? I know that I could buy a Drobo, or similar, and use this instead. However, I would prefer to use a built-in feature of Windows Server should it exist. It seems surprising to me that these features should be available in Home Server but not in an enterprise fileserver.

    Read the article

  • LTO(4) tape shelf life estimation?

    - by emilp
    LTO tapes, Maxell in this case, are often marketed as having a shelf life of 30 years or more when stored under "optimal conditions". Is there a way to get a good estimate of the shelf life, given parameters such as relative humidity and temperature? Obsolescence of the tapes aside, is there a way of determining the impact on shelf life of any deviation from those optimal conditions? In other words, how many years are lost when storing, say, 1 degree above the specified range? Regards, Emil

    Read the article

  • Bacula v5.0.2 Windows Installation Issues

    - by JohnyD
    First off, I am very new to Bacula but I'm very intrigued by what I've read. I'm looking to set up Bacula 5.0.2 on a Windows 2008 R2 server. I've run the installer and at the end it asks me to configure DIR name, DIR password, DIR Address. Windows documentation is somewhat hard to come by and I'm not certain what exactly I'm supposed to enter here. Do I need to create a local account that matches this info? Will the installation process create the account for me? Will this be the account that handles the FD daemon/service? I'm also not certain if Address means a network location or a local directory. I apologize for my ignorance. Currently I'm trying to use the following information: Name: john; pass: john; address: thin1 (the server name, although I have also tried thin1.fqdm.local and 10.0.0.104). This info allows the installer to complete successfully. However, when I run the BAT it hangs at "Connecting to Director thin1:9101". The Bacula File Service is currently running under the local system account. What am I doing wrong? What do I have yet to do? Once I get this working properly I assume I will need to install clients on all my Windows boxes? Also, this is a 64-bit CPU but I am installing the 32-bit client. Are there any issues with this? Should I be using the 32-bit client? Thanks very much for the help.

    Read the article

  • Ubuntu: Move fsbackup backups to Amazon S3

    - by Alexander Gladysh
    I have a legacy server (Ubuntu 9.10 Karmic x86), where the previous admin set up backups with fsbackup. This server lives in a VPS (under some kind of Xen), and it is low on HDD space (16 GB total). Now it has come to the point where the fsbackup backups take more space than the rest of the data in the system. The filesystem is 100% full, and I have already cleaned up all that I could, aside from the actual backups. I do not have any experience managing fsbackup, and I do not want to break or lose the backups. Googling fsbackup gives surprisingly low quality results... Here is what my backups look like:
      $ sudo ls -lh /var/archives
      total 8.1G
      -rw-rw---- 1 root root 318 2011-01-06 06:26 myserver-20110106.md5
      -rw-rw---- 1 root root 258 2011-01-07 06:26 myserver-20110107.md5
      -rw-rw---- 1 root root 318 2011-01-08 06:26 myserver-20110108.md5
      -rw-rw---- 1 root root 318 2011-01-09 06:26 myserver-20110109.md5
      -rw-rw---- 1 root root 346 2011-01-10 06:43 myserver-20110110.md5
      -rw-rw---- 1 root root 14M 2011-01-06 06:26 myserver-all-mysql-databases.20110106.sql.bz2
      -rw-rw---- 1 root root 14M 2011-01-07 06:26 myserver-all-mysql-databases.20110107.sql.bz2
      -rw-rw---- 1 root root 14M 2011-01-08 06:26 myserver-all-mysql-databases.20110108.sql.bz2
      -rw-rw---- 1 root root 14M 2011-01-09 06:26 myserver-all-mysql-databases.20110109.sql.bz2
      -rw-rw---- 1 root root 862 2011-01-10 06:43 myserver-all-mysql-databases.20110110.sql.bz2
      -rw-rw---- 1 root root 827K 2011-01-03 06:25 myserver-etc.20110103.master.tar.gz
      -rw-rw---- 1 root root 16K 2011-01-06 06:25 myserver-etc.20110106.tar.gz
      -rw-rw---- 1 root root 16K 2011-01-07 06:25 myserver-etc.20110107.tar.gz
      -rw-rw---- 1 root root 16K 2011-01-08 06:25 myserver-etc.20110108.tar.gz
      -rw-rw---- 1 root root 16K 2011-01-09 06:25 myserver-etc.20110109.tar.gz
      -rw-rw---- 1 root root 827K 2011-01-10 06:25 myserver-etc.20110110.master.tar.gz
      -rw------- 1 root root 36K 2011-01-10 06:25 myserver-etc.incremental.bin
      -rw-rw---- 1 root root 29M 2011-01-03 06:25 myserver-home.20110103.master.tar.gz
      -rw-rw---- 1 root root 11K 2011-01-06 06:25 myserver-home.20110106.tar.gz
      -rw-rw---- 1 root root 14K 2011-01-07 06:25 myserver-home.20110107.tar.gz
      -rw-rw---- 1 root root 11K 2011-01-08 06:25 myserver-home.20110108.tar.gz
      -rw-rw---- 1 root root 11K 2011-01-09 06:25 myserver-home.20110109.tar.gz
      -rw-rw---- 1 root root 2.0M 2011-01-10 06:25 myserver-home.20110110.master.tar.gz
      -rw------- 1 root root 27K 2011-01-10 06:25 myserver-home.incremental.bin
      -rw-rw---- 1 root root 1.5G 2011-01-03 06:29 myserver-opt.20110103.master.tar.gz
      -rw-rw---- 1 root root 1.5M 2011-01-06 06:25 myserver-opt.20110106.tar.gz
      -rw-rw---- 1 root root 1.5M 2011-01-07 06:25 myserver-opt.20110107.tar.gz
      -rw-rw---- 1 root root 1.5M 2011-01-08 06:25 myserver-opt.20110108.tar.gz
      -rw-rw---- 1 root root 1.5M 2011-01-09 06:25 myserver-opt.20110109.tar.gz
      -rw-rw---- 1 root root 1.5G 2011-01-10 06:30 myserver-opt.20110110.master.tar.gz
      -rw------- 1 root root 201K 2011-01-10 06:30 myserver-opt.incremental.bin
      -rw-rw---- 1 root root 2.3G 2011-01-03 06:41 myserver-srv.20110103.master.tar.gz
      -rw-rw---- 1 root root 44M 2011-01-06 06:26 myserver-srv.20110106.tar.gz
      -rw-rw---- 1 root root 27M 2011-01-07 06:25 myserver-srv.20110107.tar.gz
      -rw-rw---- 1 root root 39M 2011-01-08 06:26 myserver-srv.20110108.tar.gz
      -rw-rw---- 1 root root 2.0M 2011-01-09 06:25 myserver-srv.20110109.tar.gz
      -rw-rw---- 1 root root 2.7G 2011-01-10 06:42 myserver-srv.20110110.master.tar.gz
      -rw------- 1 root root 3.4M 2011-01-10 06:42 myserver-srv.incremental.bin
    I'm thinking about moving backups to Amazon S3, but before that I have to free some space, so the server can work. Perhaps I can mount /var/archives to an Amazon S3 bucket somehow... Any advice?
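
    A minimal sketch of the "push, then prune locally" approach using boto3 (which postdates this question; the bucket name and the retention window are assumptions): upload everything in /var/archives to S3, then delete local copies older than a few days so the 16 GB disk stops filling up while the S3 copies remain available.

        import os
        import time
        import boto3

        ARCHIVE_DIR = "/var/archives"
        BUCKET = "myserver-fsbackup"          # hypothetical bucket
        KEEP_LOCAL_DAYS = 3

        s3 = boto3.client("s3")
        now = time.time()

        for name in sorted(os.listdir(ARCHIVE_DIR)):
            path = os.path.join(ARCHIVE_DIR, name)
            if not os.path.isfile(path):
                continue
            s3.upload_file(path, BUCKET, f"fsbackup/{name}")
            # Keep only the most recent local copies once they are safely in S3.
            if now - os.path.getmtime(path) > KEEP_LOCAL_DAYS * 86400:
                os.remove(path)

    Run from cron after fsbackup finishes, this keeps fsbackup itself untouched while the bulk of the archive history lives off the VPS.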

    Read the article

  • Auto-archive IMAP mail folders on OS X

    - by Pradeep
    Hi, I am trying to achieve the following. Download all messages from the mail server (and remove the downloaded messages from the server). Downloaded messages should go into a local mailbox, preserving the folder structure as it was defined on the server. The download process should be automatic and shouldn't create duplicates. I am on OS X and looking for solutions using Apple Mail or Thunderbird or similar. So far I have found POP is not the way to go (as it loses folder structure and can potentially cause duplicates). The solution described here seems very good but isn't yet available for Thunderbird or Apple Mail: http://getsatisfaction.com/mozilla_messaging/topics/auto_archive_and_keep_folder_structure. Another alternative is Outlook, which has auto-archive, but that is paid and I think exports to PST instead of the more common mbox format. Yet another alternative is http://www.pop4.org/, which adds support for folder management to POP, but I don't think that is going to become usable soon. Any other better solutions? Thank you
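
    If a small script is acceptable alongside the mail client, a minimal standard-library sketch (imaplib + mailbox) can do this over IMAP: mirror each server folder into a local mbox file with the same name, then delete the downloaded messages from the server, so re-runs cannot create duplicates. The host and credentials are assumptions, the folder-name parsing is simplified (it assumes a "/" hierarchy delimiter), and this should be tested against a throwaway account before pointing it at real mail.

        import imaplib
        import mailbox
        import os
        import re

        HOST, USER, PASSWORD = "imap.example.com", "me@example.com", "secret"
        LOCAL_ROOT = os.path.expanduser("~/MailArchive")

        imap = imaplib.IMAP4_SSL(HOST)
        imap.login(USER, PASSWORD)

        typ, folders = imap.list()
        for raw in folders:
            # e.g. b'(\\HasNoChildren) "/" "Clients/Acme"' -> Clients/Acme
            m = re.search(r' "." "?([^"]+)"?$', raw.decode())
            if not m:
                continue
            folder = m.group(1)
            imap.select(f'"{folder}"')
            typ, data = imap.search(None, "ALL")

            local_path = os.path.join(LOCAL_ROOT, folder.replace("/", os.sep) + ".mbox")
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            mbox = mailbox.mbox(local_path)

            for num in data[0].split():
                typ, msg = imap.fetch(num, "(RFC822)")
                mbox.add(msg[0][1])                       # raw RFC822 bytes
                imap.store(num, "+FLAGS", r"\Deleted")    # remove from server once stored
            mbox.flush()
            imap.expunge()

        imap.logout()

    The resulting mbox files can be imported into Apple Mail or Thunderbird, which keeps the archive in the common format the question asks for.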

    Read the article

  • How to warehouse data that is not needed from MS SQL server

    - by I__
    I have been asked to truncate a large table in MS SQL Server 2008. The data is not needed but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is, since I don't need the data on a day-to-day basis, what do I do with it to protect and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.
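
    One hedged approach (a sketch, not the only option): copy the rows you are about to purge into a table in a separate archive database that gets its own one-off backup, verify the counts, and only then delete them from the live table. "Recovery" two years later is then just a SELECT from the archive table (or a restore of that archive database), which comfortably fits the few-hours window. The server, database, table and CreatedDate cutoff column below are all hypothetical, and the archive table is assumed to already exist with the same columns.

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=SQLSERVER;"
            "DATABASE=LiveDb;Trusted_Connection=yes;"
        )
        cur = conn.cursor()
        CUTOFF = "2010-01-01"   # hypothetical boundary between kept and archived rows

        # 1. Copy the cold rows into the archive database.
        cur.execute("""
            INSERT INTO ArchiveDb.dbo.BigTable_Archive
            SELECT * FROM dbo.BigTable WHERE CreatedDate < ?
        """, CUTOFF)

        # 2. Verify before destroying anything.
        cur.execute("SELECT COUNT(*) FROM ArchiveDb.dbo.BigTable_Archive WHERE CreatedDate < ?", CUTOFF)
        archived = cur.fetchone()[0]
        cur.execute("SELECT COUNT(*) FROM dbo.BigTable WHERE CreatedDate < ?", CUTOFF)
        live = cur.fetchone()[0]
        assert archived >= live, "archive copy is missing rows; aborting the delete"

        # 3. Remove them from the live table (for ~2 million rows you would batch this
        #    delete to keep the transaction log manageable).
        cur.execute("DELETE FROM dbo.BigTable WHERE CreatedDate < ?", CUTOFF)
        conn.commit()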

    Read the article

  • How to replicate windows server 2008?

    - by Sem Dendoncker
    Hello, we have 2 servers; let's call them server A and server B. Server B is never used unless something goes wrong with server A. I need a system that can replicate server A to server B, but this doesn't have to be continuous. It only has to be done once every day (let's say at 1 am or so). Also, it's not necessary to take over the server automatically; this can be done manually. On this server IIS is running and SQL 2008, so all web applications and databases must be synced/replicated. What tools can I use? I hope someone can help me with this. Cheers, Sem

    Read the article

  • How to merge-copy multiple folders in Outlook?

    - by user553702
    In MS Outlook, I need to be able to incrementally copy items in multiple folders in the Exchange account to a local PST file with a mirrored folder structure. I need the items in each folder to be combined into the destination. For example, let's say on the server account I have a folder tree like this: Inbox SortedEmails1 SortedEmails2 SortedEmails3 I also have these same four folders in the local PST file, which I want to keep growing as I incrementally pull more messages from the Exchange server. Messages from "Inbox" should go to the local "Inbox", messages from "SortedEmails1" should go into "SortedEmails1" in the local PST, etc. I'd like to avoid manually iterating into every single folder and copying items. How can I do this?
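
    A hedged sketch of automating this with pywin32 (Outlook COM automation): for each folder name in a list, copy every item newer than the newest item already present in the matching folder of the local PST, so repeated runs keep appending without re-copying and without folder-by-folder manual work. The store names are assumptions, the "newer than the newest archived item" heuristic is a simplification of true incremental copying, and it assumes a reasonably recent pywin32 where ReceivedTime is a datetime.

        import datetime
        import win32com.client

        FOLDERS = ["Inbox", "SortedEmails1", "SortedEmails2", "SortedEmails3"]

        ns = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
        exchange_root = ns.Folders.Item("Mailbox - John Doe")   # hypothetical Exchange store
        pst_root = ns.Folders.Item("Local Archive")              # PST already added to the profile

        for name in FOLDERS:
            src = exchange_root.Folders.Item(name)
            dst = pst_root.Folders.Item(name)

            # Newest item already archived locally (epoch if the folder is empty).
            newest = datetime.datetime(1970, 1, 1)
            for item in dst.Items:
                received = getattr(item, "ReceivedTime", None)
                if received and received.replace(tzinfo=None) > newest:
                    newest = received.replace(tzinfo=None)

            copied = 0
            for item in src.Items:
                received = getattr(item, "ReceivedTime", None)
                if received and received.replace(tzinfo=None) > newest:
                    item.Copy().Move(dst)   # original stays on the server; the copy goes to the PST
                    copied += 1
            print(f"{name}: copied {copied} item(s)")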

    Read the article

  • Setting up a Windows Server 2008 R2 DC + Fileserver : native or virtual?

    - by user126890
    I want to deploy a new DC + Fileserver using Windows Server 2008 R2 SP1 Standard Edition on a Dell PowerEdge R410 and iSCSI storage for a small business (~30 people). Should I install the system natively on the server or use a virtualization layer? I don't have a budget for virtualization, so I've got to go with something free... What's the better working routine, taking snapshots of VMs or taking backups (Acronis/CloneZilla) of the systems? If I use a virtualization system, I need a GUI so some people in the business can reset the system to an earlier state in emergency situations. I wanted to install phpVirtualBox once but never finished; is it suitable in a production environment? Server specs: Intel Xeon E5620 CPU (2.40GHz, 4C, 12MB Cache), 8GB RAM Dual Rank LV RDIMMs 1333MHz, 2x 1TB SATA 7.2K 3.5", RAID1

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the expected 'clicking' noise that most hard drives and I am able to view the data, but I am unable to actually retrieve the data. I attempted to use SpinRite in order to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that it's used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool and attempt to "scan for an attempt recovery of bad sectors," the tool begins to run then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I begin to try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive has being 'Healthy.' I dropped the drive off at a data recovery store and they said that "The data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments) but it returns the following error: The type of the filesystem is RAW. CHKDSK is not available for RAW drives. Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions... I'm willing to continue trying tools such as TestDisk and/or PhotoRec (as the majority of the data that I'd like to salvage are photos) but how long I should expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux so I welcome any suggestions for utilities or tools and strategies with which you've had success.

    Read the article

  • Best Way to Archive Digital Photos and Avoid Duplicate File Names

    - by user31575
    This problem pertains to archiving of digital pictures taken from multiple cameras. Answers here covered the general topic of the mechanics of backups: How do you archive digital photos and videos? I however face another problem. Having multiple cameras (Canon) and multiple SD cards (mixed and matched at random), I have found that different SD cards have different photos with the same file name, i.e. two different photos each named IMG_3141.JPG. Additionally, for better or worse, I've backed up the files to multiple places and need to consolidate my backups. I want to eliminate duplicates, but not clobber files. The only way I can think of is to append a hash (md5 or sha1) to the file name, i.e. IMG_3141.JPG becomes IMG_3141_KT229QZ31415926ASDF.JPG, then sorting them out. Any better ways? (Note: this open letter addresses the 'duplicate file name' concern: http://photofocus.com/2010/09/13/an-open-letter-to-digital-camera-manufacturers-regarding-camera-file-naming/ )
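
    A minimal standard-library sketch of exactly that rename-with-hash idea: copy every photo into one archive folder, appending a short SHA-1 of the file contents to the name. Same name plus same hash means a true duplicate (skipped); same name plus different hash means two different photos that happen to share IMG_nnnn.JPG, and both survive with distinct names. The source and archive paths are assumptions.

        import hashlib
        import os
        import shutil

        def sha1_of(path, chunk=1 << 20):
            h = hashlib.sha1()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        def archive_photos(source_dirs, archive_dir):
            os.makedirs(archive_dir, exist_ok=True)
            for source in source_dirs:
                for dirpath, _dirs, files in os.walk(source):
                    for name in files:
                        if not name.lower().endswith((".jpg", ".jpeg", ".cr2", ".mov")):
                            continue
                        src = os.path.join(dirpath, name)
                        stem, ext = os.path.splitext(name)
                        digest = sha1_of(src)[:8]
                        dst = os.path.join(archive_dir, f"{stem}_{digest}{ext}")
                        if os.path.exists(dst):      # identical content already archived
                            continue
                        shutil.copy2(src, dst)

        # e.g. archive_photos(["/Volumes/SD1", "/Volumes/SD2", "/backups/old"], "/archive/photos")

    Eight hex characters of the hash are plenty to disambiguate name collisions while keeping the filenames readable; the full hash could be kept in a sidecar index if stronger duplicate detection across different names is ever needed.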

    Read the article

  • Computer won't recognize second WD internal HD

    - by music man
    I installed a Western Digital Caviar Green 1TB internal hard drive and want to use it for backup. My BIOS recognizes the device, but it is not recognized in Disk Management. It also is not recognized in "My Computer." When I right-click my C drive > Properties > Hardware, I see it listed. First the drive was plugged into SATA port 1 on my motherboard, then I tried it in port 3, where it remains. I've been working on this for hours. Any help would be appreciated. More info: Hard drive model #: WD10EADS-65M2B1; Windows 7 Home Premium 64-bit Service Pack 1; Computer: HP Model p6404y; Processor: AMD Phenom II X4 820 2.80GHz; 8 GB RAM

    Read the article

  • Has anyone used products by StorSimple? [closed]

    - by AlamedaDad
    Their products are storage systems similar to NetApp, but their "hook" is that they provide automated tiered storage, with the third tier being cloud providers like Amazon and Azure. They have an interesting story in which they provide primary SAN/iSCSI storage to VMware and Exchange and at the same time do snapshots to the cloud. This provides a possible DR option if you have a second system at an alternate location. I was impressed, but I've never heard of them, so I'm looking for input and advice on the product. Thanks in advance...

    Read the article
