VMware vSphere 5: 4 pNICs for iSCSI vs. 2 pNICs
Posted by gravyface on Server Fault, 2012-12-09.
New SAN for me, one I've never used before: an IBM DS3512, dual-controller, with four 1GbE ports per controller, that a client bought and needs help setting up.
The two hosts each have 8 pNICs. I usually reserve 2 pNICs per host for iSCSI (plus 2 for VM traffic, 2 for management, and 2 for vMotion, staggered across adapters), but the extra ports on the SAN have me wondering whether storage I/O would be significantly improved with 2 additional NICs per host, or whether limitations of the vmkernel/software initiator would prevent the additional paths from ever being used.
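For reference, here's roughly what I'd be setting up for the 4-NIC case via esxcli on ESXi 5.x; the vSwitch/vmnic/vmk/vmhba names and the addresses below are placeholders, not my actual config:

    # Assumed names: vSwitch1 carries iSCSI, uplinks vmnic2-vmnic5,
    # software iSCSI adapter is vmhba33 -- adjust to match the host.

    # One port group + vmkernel port per pNIC
    for i in 1 2 3 4; do
      esxcli network vswitch standard portgroup add \
          --portgroup-name=iSCSI-$i --vswitch-name=vSwitch1
      esxcli network ip interface add \
          --interface-name=vmk$i --portgroup-name=iSCSI-$i
      esxcli network ip interface ipv4 set \
          --interface-name=vmk$i --ipv4=10.10.10.1$i \
          --netmask=255.255.255.0 --type=static
    done

    # Pin each port group to exactly one active uplink (required for port binding)
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name=iSCSI-2 --active-uplinks=vmnic3
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name=iSCSI-3 --active-uplinks=vmnic4
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name=iSCSI-4 --active-uplinks=vmnic5

    # Bind all four vmkernel ports to the software iSCSI initiator
    for i in 1 2 3 4; do
      esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk$i
    done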
I'm not seeing a lot of 4-pNIC iSCSI implementations per host; 2 is the de facto standard from what I've read/seen online. I could and probably will do some I/O testing, but I'm just wondering if there's a "wall" that someone else discovered long ago (i.e. before 10GbE) that makes a 4-NIC-per-host iSCSI setup somewhat pointless.
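For that I/O testing, I'd also switch the LUNs to round robin so all of the bound paths actually carry traffic; something like this (the device ID is a placeholder for the DS3512 LUN's actual naa identifier):

    DEV=naa.60080e50001234560000000000000000

    # Use round robin across all active paths instead of fixed/MRU
    esxcli storage nmp device set --device=$DEV --psp=VMW_PSP_RR

    # Rotate to the next path after every I/O instead of the default 1000,
    # a common tweak when benchmarking multi-NIC iSCSI
    esxcli storage nmp psp roundrobin deviceconfig set \
        --device=$DEV --type=iops --iops=1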
Just to clarify: I'm not looking for a how-to, but for an explanation (link to a paper, a VMware recommendation, a benchmark, etc.) of why 2-NIC iSCSI configurations are the norm rather than 4-NIC configurations, e.g. storage vendor limitations, vmkernel/initiator limitations, and so on.