With only a modest budget, I want to move my four Xen servers over to network storage, either NFS or iSCSI; the choice will come down to how well each performs when we test it (we need good throughput, and it must keep working through link- and switch-failure tests). We may add another couple of Xen servers at some point once this is done.
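For reference, here is roughly how I plan to spot stalls during those failure tests: a small Python loop that keeps writing to the mounted storage and flags any write that takes too long. The mount point and threshold below are placeholders for my setup, not anything standard.

```python
#!/usr/bin/env python3
"""Crude I/O stall probe: run it against the mounted storage while
pulling cables or rebooting switches, and watch for latency spikes."""
import os
import time

MOUNT = "/mnt/shared-storage"   # hypothetical NFS/iSCSI mount point
BLOCK = b"x" * (1024 * 1024)    # write 1 MiB per iteration
STALL_THRESHOLD = 2.0           # seconds; flag anything slower

path = os.path.join(MOUNT, "failover-probe.dat")
with open(path, "wb") as f:
    while True:
        start = time.monotonic()
        f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())    # push the write past the page cache
        elapsed = time.monotonic() - start
        if elapsed > STALL_THRESHOLD:
            print(f"{time.ctime()}: write stalled for {elapsed:.1f}s")
```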
I don't know much about the design and operation of storage networks, so I would really appreciate some hints from those with experience. The budget is around $3,800, excluding the storage appliance. I currently think these are my options for staying on budget:
1) Go for used InfiniBand hardware and aim for 10 Gb performance.
2) Stick with gigabit Ethernet and buy some new switches (Cisco or ProCurve) to create a storage-only Ethernet LAN. Upgrade to 10 GbE later, but try to use hardware capable of it where possible to reduce upgrade costs.
I have seen used, warrantied InfiniBand switches at reasonable prices (presumably because big companies are converging on 10 Gb Ethernet?), and the promise of cheap 10 Gb is attractive.
I know nothing about IB, so here come the questions:
Can I buy two switches and put multiple HCAs (IB host channel adapters, the equivalent of HBAs) in my Xen and storage nodes to get redundancy and increased performance, without complexity or expensive management-software costs? If so, can you point me to some examples? (I sketch the link-state check I plan to use for this after these questions.)
Do NFS and iSCSI work just the same over IB as they do over Ethernet?
Is IB a sensible choice, or could/should I use Ethernet or FC on the same budget? I'm keen not to get boxed into a corner for future upgrades, though.
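On the redundancy question above, this is the kind of check I'd run during a pull-the-cable test to confirm one path stays up while the other is down. The interface names (ib0/ib1, or eth2/eth3 on a dedicated storage VLAN) are assumptions about my eventual setup, and it's Linux-only since it reads /sys.

```python
#!/usr/bin/env python3
"""Report link state for the redundant storage-network interfaces.
Run alongside the I/O probe above during link/switch failure tests."""
import pathlib

# Hypothetical redundant interfaces; rename to match the real hosts
IFACES = ["ib0", "ib1"]

for iface in IFACES:
    state_file = pathlib.Path("/sys/class/net") / iface / "operstate"
    try:
        state = state_file.read_text().strip()   # "up", "down", ...
    except FileNotFoundError:
        state = "missing"                        # interface not present
    print(f"{iface}: {state}")
```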
For the storage, I am likely to build a server running NexentaStor, with the intention of later adding more disks and SSDs, and eventually a second server to provide a failover option at the storage level. An HP LeftHand starter SAN is also under consideration.
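Since NexentaStor is ZFS-based, I'd also poll pool health during the failover tests; a minimal sketch, assuming local or SSH shell access to the box and the standard zpool CLI (zpool status -x prints "all pools are healthy" when nothing is wrong):

```python
#!/usr/bin/env python3
"""Minimal ZFS pool health poll: prints anything other than the
all-clear message from 'zpool status -x', once a minute."""
import subprocess
import time

while True:
    result = subprocess.run(
        ["zpool", "status", "-x"],   # terse health summary
        capture_output=True, text=True,
    )
    status = result.stdout.strip()
    if status != "all pools are healthy":
        print(f"{time.ctime()}: {status}")
    time.sleep(60)
```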
Thanks in advance.