Clustering for Mere Mortals (Pt3)
- by Geoff N. Hiten
The Controller
Now we get to the meat of the matter. If you want a virtual cluster, the first thing you have to do is create your own portable domain. I started with a plain-vanilla install of Windows 2003 R2 Standard on a semi-default VM (1 GB RAM, 2 cores, 2 NICs, 128 GB dynamically expanding VHD file). I chose this because it has the smallest disk and memory footprint of any currently supported Microsoft server product. I created the VM with a single dynamically expanding VHD, one fixed 16 GB VHD, and two NICs. One NIC is connected to the outside world and the other is part of an internal-only network. The first NIC is set up as a DHCP client. We will get to the other one later.
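For reference, here is roughly how a VM like that could be defined from the command line. VMLite is VirtualBox-based, so I am assuming the standard VBoxManage tool is available; the VM name, host adapter name, internal network name, and file paths below are placeholders, and the disk sizes are in MB.

    VBoxManage createvm --name MicroAD-DC --register
    VBoxManage modifyvm MicroAD-DC --memory 1024 --cpus 2 --ostype Windows2003
    rem NIC1 goes to the outside world, NIC2 to an internal-only network
    VBoxManage modifyvm MicroAD-DC --nic1 bridged --bridgeadapter1 "Local Area Connection"
    VBoxManage modifyvm MicroAD-DC --nic2 intnet --intnet2 MicroAD-Internal
    rem 128 GB dynamically expanding system disk, 16 GB fixed disk for the iSCSI targets
    VBoxManage createhd --filename C:\VMs\MicroAD-DC.vhd --size 131072 --format VHD --variant Standard
    VBoxManage createhd --filename C:\VMs\MicroAD-iSCSI.vhd --size 16384 --format VHD --variant Fixed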
I actually tried this with Windows 2008 R2, but it failed miserably. I am not sure whether the problem was 2008 R2 itself or the fact that I tried to use cloned VMs in the cluster. Clustering is one place where NewSID would really come in handy. Too bad Microsoft bought and buried it.
Load and patch the OS (hence the need for the outside connection). This is a good time to go get dinner, maybe a movie too. There are close to a hundred patches that need to be downloaded and applied. Avoiding that mess was why I put so much time into trying to get the 2008 R2 version working. Maybe next time. Don't forget to add the guest extensions for VMLite (or whatever virtualization product you prefer).
Set a fixed IP address on the internal-only NIC. Do not give it a gateway. Use the same IP address for both the NIC and its DNS server entry. This IP should come from a range that never appears on your public network, because you will need every address in that range available. See the previous post for the exact settings I used.
I chose 10.97.230.1 as the server. The rest of the 10.97.230.x range is what I will use later. For the curious, those numbers are based on elements of my home address. Not truly random, but good enough for this project.
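If you prefer the command line to the NIC properties dialog, something like this inside the VM should do it. I am assuming the internal-only adapter has been renamed to "Internal" in Network Connections; substitute whatever yours is called.

    rem Static address on the internal-only NIC, deliberately no default gateway
    netsh interface ip set address name="Internal" static 10.97.230.1 255.255.255.0
    rem Point the NIC's DNS entry back at itself
    netsh interface ip set dns name="Internal" static 10.97.230.1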
Do not bridge the network connections. I never allowed the cluster nodes direct access to any public network.
Format the fixed VHD and leave it alone for now.
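If you want to do that from a command prompt rather than Disk Management, a sketch (assuming the fixed VHD shows up as disk 1 inside the VM and you want it as drive F:) looks like this:

    diskpart
        select disk 1
        create partition primary
        assign letter=F
        exit
    format F: /FS:NTFS /V:iSCSI /Q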
Promote the VM to a Domain Controller. If you have never done this, don't worry. The only meaningful decision is what to call the new domain. I prefer a bogus name that does not correspond to a real Top-Level Domain (TLD). .com, .biz, .net, and .org are all TLDs that we know and love. I chose .test as the TLD since it is descriptive AND it is reserved for testing, so it will never exist in the real world. The domain is called MicroAD, which gives me MicroAD.test as my domain.
During the promotion, you will be prompted to install DNS as part of creating the domain. You want to accept this option. The installer will automatically make this DNS server the authoritative owner of the MicroAD.test DNS domain (not to be confused with the MicroAD.test Active Directory domain).
For the rest of the DCPROMO process, just accept the defaults.
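If you would rather script the promotion than click through the wizard, DCPROMO on Windows 2003 will take an unattend file (dcpromo /answer:&lt;file&gt;). The sketch below is from memory, so verify the keys against the DCPROMO unattend documentation before trusting it; the file name and password are placeholders.

    ; newdomain.txt -- used as: dcpromo /answer:newdomain.txt
    [DCInstall]
    ReplicaOrNewDomain=Domain
    NewDomain=Forest
    NewDomainDNSName=MicroAD.test
    DomainNetBiosName=MICROAD
    AutoConfigDNS=Yes
    SafeModeAdminPassword=ChangeMe!
    RebootOnSuccess=Yes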
Now let's make our IP address management easy. Add the DHCP role to the server. Add the server (10.97.230.1 in this case) as the default gateway to assign to DHCP clients. Here is where you have to be VERY careful and bind it ONLY to the internal NIC. Trust me, your network admin will NOT like an extra DHCP server "helping" out on her network. Go ahead and create a range of 10-20 IP addresses in your scope. You might find other uses for a pocket domain controller <cough> Mirroring </cough> than just for building a cluster. And clustering in SQL Server 2008 and Windows 2008 R2 fully supports DHCP addresses.
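If you would rather script the scope than walk through the New Scope wizard, the netsh dhcp context can handle it. This is a sketch under my own assumptions: a /24 scope handing out 10.97.230.100-120, with option 003 (default gateway) set as described above and option 006 (DNS server) added because domain members will need the DC's DNS. The binding to only the internal NIC is easiest to confirm in the DHCP console under the server's Properties, Advanced, Bindings.

    netsh dhcp server add scope 10.97.230.0 255.255.255.0 "MicroAD Internal"
    netsh dhcp server scope 10.97.230.0 add iprange 10.97.230.100 10.97.230.120
    rem 003 = default gateway, 006 = DNS server, 015 = DNS domain name
    netsh dhcp server scope 10.97.230.0 set optionvalue 003 IPADDRESS 10.97.230.1
    netsh dhcp server scope 10.97.230.0 set optionvalue 006 IPADDRESS 10.97.230.1
    netsh dhcp server scope 10.97.230.0 set optionvalue 015 STRING MicroAD.test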
Now we have three of the five key roles ready. Two more to go.
Next comes file sharing. Since your cluster node VMs will not have access to the outside world, you have to have some way to get files into them. I simply go to the root of C: and create a "Shared" folder. I then share it out and grant "Everyone" full control on both the share and the underlying NTFS folder. This will be immensely useful for service packs, demo databases, and any other software that isn't packaged as an ISO we can mount in the VM.
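From the command line that comes out to roughly the following. As far as I recall, net share on Windows 2003 has no /GRANT switch, so the share-level Full Control for Everyone still gets set in the share's Properties dialog; cacls covers the NTFS side.

    md C:\Shared
    net share Shared=C:\Shared /UNLIMITED /REMARK:"Drop folder for service packs and demo files"
    rem Grant Everyone full control on the underlying NTFS folder
    cacls C:\Shared /E /G Everyone:F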
Finally, we need to create a block-level, multi-connect storage device. The kind folks at StarWind Software (http://www.starwindsoftware.com/) graciously gave me a non-expiring demo license expressly for this purpose. Their iSCSI SAN software lets you create an iSCSI target from nearly any storage medium. Refreshingly, their product does exactly what they say it does. Thanks.
Remember that 16 GB VHD file? That is what we are going to carve our LUNs out of. I created an iSCSI folder off the root, just so I can keep everything organized. I then carved five 2 GB iSCSI targets from that folder. I chose a fixed VHD for performance. I tried this earlier with a dynamically expanding VHD, but too many layers of abstraction and sparseness combined to make it unusable even for a demo. Stick with a fixed VHD so there is a one-to-one mapping between logical and physical storage. If you read the previous post, you know what I named these iSCSI LUNs and why.
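The targets themselves get created in the StarWind management console, so there is no command line to show for that step, but a quick sanity check that the target service is listening on the internal network is easy enough (3260 is the standard iSCSI port):

    netstat -an | find "3260"
    rem Expect a LISTENING entry on 10.97.230.1:3260 (or 0.0.0.0:3260)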
Yes, five 2 GB targets only account for 10 GB of the 16 GB VHD, so I do have some leftover space. Always leave yourself room for future growth or options.
This gets us up to where we can actually build the nodes and install SQL. As with most clusters, the real work happens long before the individual nodes get installed and configured. At least it does if you want the cluster to be a true high-availability platform.