Solution for file store needing large number of simultaneous connections
Posted by Tennyson H on Server Fault
Published on 2014-08-21T02:36:33Z
So I'm fairly new to large-scale architectures. We're currently using Linode instances for our project, but we're brainstorming about scaling.
We need a file store system that can deliver ~50 MB folders (user data) to our compute instances in a reasonable amount of time (<20 sec), scale to 10,000+ total users, and handle perhaps 100+ simultaneous transfers. We're also unsure whether to network mount (SSHFS/NFS) or just do a full transfer store -> instance at the beginning and rsync instance -> store at the end.
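To be concrete, the transfer-based approach I have in mind would look roughly like this (hostnames and paths are just placeholders for illustration):

    # Pull the user's folder from the file store when the instance starts
    rsync -az filestore.example.com:/data/users/$USER_ID/ /scratch/$USER_ID/

    # ... run the computation against /scratch/$USER_ID ...

    # Push any changes back to the store when the job finishes
    rsync -az /scratch/$USER_ID/ filestore.example.com:/data/users/$USER_ID/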
I've experimented with SSHFS between our small Linode instances, but it seems to be bottlenecked at about 15 MB/s total bandwidth, which wouldn't hold up under 10+ simultaneous transfers, let alone at much larger scale. I also tried to investigate NFS but couldn't get it working, and I have little hope that it would perform well within our Linode network.
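For reference, the SSHFS test was just a plain mount along these lines (host and paths are placeholders), with throughput estimated by timing a copy of one sample folder:

    # Mount the store over SSH on a compute instance
    sshfs filestore.example.com:/data/users /mnt/users -o reconnect

    # Rough throughput check: time copying one ~50 MB user folder
    time cp -r /mnt/users/example-user /tmp/example-user

    # Unmount when done
    fusermount -u /mnt/users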
Are there tools on other cloud providers that match our needs? Should we be mounting, or should we be transferring?
Thanks very much!