Best NIC config when virtual servers need iSCSI storage?
- by icky2000
I have a Windows Server 2008 machine running Hyper-V. There are six NICs on the server, configured like this:
NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc)
NIC03: connected to iSCSI VLAN #1
NIC04: connected to iSCSI VLAN #2
NIC05: dedicated to one virtual switch for VMs
NIC06: dedicated to another virtual switch for VMs
The iSCSI NICs are, obviously, used for the storage that hosts the VMs. I put half of the VMs on the virtual switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 and NIC06 connect to are trunked, and we tag each VM's NIC for the appropriate VLAN. There is no clustering on this host.
Now I wish to assign some iSCSI storage directly to a VM. As I see it, I have two options:
1. Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.
2. Create two additional virtual switches on the host, assigning one to NIC03 and one to NIC04. Add two NICs to the VM that needs iSCSI storage and let them share that path to the SAN with the host.
I'm wondering how much overhead VLAN tagging in Hyper-V adds; I haven't seen any discussion of that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs, or cause some other problem, and threaten storage access for the entire host, which would be bad.
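For what it's worth, the wire-level cost of the 802.1Q tag itself is tiny; the open question is really the CPU cost of tag insertion/stripping in the virtual switch. A quick back-of-the-envelope sketch of the on-the-wire overhead (the framing constants are standard Ethernet values, not anything from my setup):

```python
# Rough estimate of 802.1Q VLAN tag overhead on the wire.
# Assumes standard Ethernet framing: 14-byte header + 4-byte FCS,
# plus the 4-byte 802.1Q tag (TPID + TCI).

ETH_HEADER = 14   # dst MAC + src MAC + EtherType
ETH_FCS = 4       # frame check sequence
VLAN_TAG = 4      # 802.1Q TPID + TCI

def tag_overhead_pct(mtu: int) -> float:
    """Percentage of each tagged frame consumed by the VLAN tag."""
    frame = mtu + ETH_HEADER + ETH_FCS + VLAN_TAG
    return 100.0 * VLAN_TAG / frame

for mtu in (1500, 9000):  # standard vs. jumbo frames (common for iSCSI)
    print(f"MTU {mtu}: {tag_overhead_pct(mtu):.3f}% per frame")
# MTU 1500: 0.263% per frame
# MTU 9000: 0.044% per frame
```

So bandwidth-wise the tag is noise, especially with jumbo frames; any measurable overhead would come from per-packet processing in the Hyper-V switch, which is the part I can't find numbers for.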
Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?