-
as seen on Server Fault
I want to test Ceph (a distributed network storage and file system) on some EC2 hosts derived from the Amazon Linux AMI (amzn-ami-2011.09.2.x86_64-ebs).
The kernel version is 3.2 and btrfs is enabled. But the kernel config options related to Ceph (CONFIG_CEPH_FS and CONFIG_BLK_DEV_RBD) seem to…
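A quick way to verify the two options named above is to grep the kernel config; a minimal sketch, assuming the config ships at /boot/config-$(uname -r) on the AMI (an inline sample stands in for the real file here):

```shell
# Hypothetical excerpt standing in for /boot/config-$(uname -r) on the host.
sample='CONFIG_BTRFS_FS=y
CONFIG_CEPH_FS=m
# CONFIG_BLK_DEV_RBD is not set'

# An option is usable only when set to y (built in) or m (loadable module);
# "is not set" lines mean the feature was compiled out.
echo "$sample" | grep -E '^CONFIG_(CEPH_FS|BLK_DEV_RBD)=[ym]'
```

In this sample only CONFIG_CEPH_FS=m matches, so the RBD block driver would have to come from a rebuilt kernel or an out-of-tree module.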
-
Anyone have any experience using MooseFS? I want an easy distributed storage platform to store a static data archive of about 10 TB and serve it to 20-40 nodes. I also want to be able to add storage as the archive grows without having to rebuild the filesystem. I don't care if it's a bit slow. I…
-
I am evaluating GlusterFS and Ceph. It seems Gluster is FUSE-based, which means it may not be as fast as Ceph, but Gluster appears to have a very friendly control panel and is easy to use.
Ceph was merged into the Linux kernel a few days ago, which indicates it has much more potential and may…
-
I am currently looking into POHMELFS because of its ability to scale reads. Does anyone run it in production who could tell me how stable it is?
-
I have a GPFS cluster composed of 10 Linux nodes, managed by a primary server A, which also acts as the NSD server for a first stack of disks.
I attached a new JBOD to one of the nodes (call it node B), which I would like to become an NSD server for this new stack of disks, while still being included in the…
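For reference, a sketch of how a second NSD server is usually defined in GPFS via an mmcrnsd stanza file; the device name, NSD name, and node name below are hypothetical, and the stanza syntax assumes GPFS 3.5 or later:

```
# Hypothetical stanza file (newdisks.stanza) for the JBOD attached to node B.
%nsd: device=/dev/sdx
  nsd=jbod_nsd1
  servers=nodeB
  usage=dataAndMetadata
```

The stanza file would then be passed to mmcrnsd -F to create the NSDs, and the new disks added to an existing filesystem with mmadddisk.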