Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet
- by matt
We have an application running on a few boxes (5 or so, and growing). The hardware is identical across the machines, and ideally the software would be as well.
I have been managing them by hand until now and don't want to anymore (setting static IP addresses, disabling unnecessary services, installing required packages...). Can anyone weigh the pros and cons of the following options, or suggest something more intelligent?
1: Install CentOS individually on each box and manage the configs with Chef/CFEngine/Puppet.
This would be good, as I've been wanting an excuse to learn one of these applications, but I don't know if it's actually the best solution.
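To show the kind of thing I mean, here is a minimal, hypothetical Puppet manifest sketch (the package name, service name, and file source are placeholders, not anything we actually run):

```puppet
# site.pp -- hypothetical sketch: keep a package, a config file,
# and a service in a known state on every node
node default {
  package { 'ntp':
    ensure => installed,
  }

  file { '/etc/sysconfig/network':
    ensure => file,
    owner  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/base/network',
  }

  service { 'ntpd':
    ensure  => running,
    enable  => true,
    require => Package['ntp'],
  }
}
```

The appeal is that per-host differences (IPs, hostnames) become parameters rather than things baked into an image.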
2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications, I can just reboot the boxes into a new image.
How do cluster admins normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg-* files? We use InfiniBand as well, and it also refuses to start if the HWADDR is wrong. Can these files be correctly generated at boot?
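To make that concrete, this is the sort of boot-time script I imagine would be needed (a rough sketch under my assumptions: the interface names, the DHCP setting, and the idea of running it from rc.local or a systemd unit before the network starts are all guesses, not a tested setup):

```shell
#!/bin/sh
# Sketch: regenerate the HWADDR line for each NIC at boot, so a single
# shared PXE image can run on boxes with different MAC addresses.

gen_ifcfg() {
    iface=$1
    # Read the real hardware address from the kernel for this interface.
    mac=$(cat /sys/class/net/"$iface"/address)
    cat <<EOF
DEVICE=$iface
HWADDR=$mac
ONBOOT=yes
BOOTPROTO=dhcp
EOF
}

# Example (commented out): rewrite the config before the network starts.
# gen_ifcfg eth0 > /etc/sysconfig/network-scripts/ifcfg-eth0
```

Whether the same trick works for the InfiniBand interfaces is exactly what I'm unsure about.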
I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with it. Does anyone have experience with this type of problem?
All the servers have SSDs in them and are fast and powerful.
Thanks,
matt.