I'm building a distributed storage system that works with storage devices of different sizes. For instance, my devices have capacities of 50GB, 70GB, 150GB, 250GB, and 1000GB, so there are five storage nodes in one system. My application will store any type of file to this storage system.
Question: How can I build distributed storage with data redundancy and fail-over, so that documents, videos, and any other type of file always have a copy on at least one other storage device, and the files remain available if any single device fails? My concern is that the 50GB device can only hold a fraction of the files that the 70GB or 150GB devices can. Treating the five devices together as one cloud-like storage pool, is there a logical way for my application to distribute or store the files across them?
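To make the question concrete, here is a minimal sketch (Python) of the kind of capacity-aware placement I have in mind: each file is written to two different devices, chosen greedily by remaining free space, so larger devices naturally absorb more data. The device names, the two-replica choice, and the Placer class are my own assumptions for illustration, not an established algorithm.

```python
import heapq

# Hypothetical device names with the capacities (GB) from my example setup.
DEVICES = {
    "disk-a": 50,
    "disk-b": 70,
    "disk-c": 150,
    "disk-d": 250,
    "disk-e": 1000,
}

REPLICAS = 2  # keep two copies so any single device can fail


class Placer:
    """Choose the REPLICAS devices with the most free space for each file."""

    def __init__(self, capacities_gb):
        # Track remaining free space (GB) per device.
        self.free = dict(capacities_gb)

    def place(self, file_size_gb):
        # Greedily pick the devices with the most remaining space;
        # this sends more data to the larger devices over time.
        candidates = heapq.nlargest(REPLICAS, self.free, key=self.free.get)
        if any(self.free[d] < file_size_gb for d in candidates):
            raise RuntimeError("not enough space for %d replicas" % REPLICAS)
        for d in candidates:
            self.free[d] -= file_size_gb
        return candidates


if __name__ == "__main__":
    placer = Placer(DEVICES)
    for name, size in [("video.mp4", 4), ("doc.pdf", 0.01), ("backup.zip", 40)]:
        print(name, "->", placer.place(size))
```

Is a greedy scheme like this reasonable, or is there a better-known placement strategy for heterogeneous device sizes?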
How do I ensure data redundancy across storage devices of different sizes?
Is there an algorithm to collate multiple blob files into a single file archive? (See the sketch after these questions.)
What is the best way to present multiple devices of different sizes as a single cloud storage pool?
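Regarding the blob-archive question, I understand that something like Python's built-in tarfile module can pack many small blobs into one archive file, which is roughly what I mean by "collate". A minimal sketch under that assumption (the blob names and contents are made up):

```python
import io
import tarfile


def bundle_blobs(blobs):
    """Pack {name: bytes} blobs into a single uncompressed tar archive in memory."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in blobs.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()


if __name__ == "__main__":
    archive = bundle_blobs({"a.txt": b"hello", "b.bin": b"\x00" * 1024})
    print(len(archive), "bytes in the archive")
```

Is bundling small files like this before distributing them a sensible idea, or does it just complicate redundancy and fail-over?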
I'm opening this topic to discuss the best way to implement this idea: keeping the design simple, what issues such an implementation runs into, how to measure its performance, and what its limitations are.