I have a dedicated server where I plan to run VirtualBox virtual machines. Since the VMs are managed with Vagrant/Chef, I may end up with many different ones. I thought it would be a good idea to deploy dnsmasq on the server to dynamically assign IP addresses to the VMs. Since each Vagrant/Chef recipe is configured to set the VM's hostname, I can find/reference the appropriate VM by its hostname.
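Per VM, what I have in mind in the Vagrantfile looks roughly like this (the box name and hostname are placeholders, and I am not sure yet whether VirtualBox's own DHCP server on the host-only network has to be disabled so that dnsmasq can take over the DHCP part):

    # Vagrantfile (sketch) - each VM gets a fixed hostname and a host-only NIC
    Vagrant.configure("2") do |config|
      config.vm.box      = "debian/bookworm64"   # placeholder box
      config.vm.hostname = "vm1"                 # hostname set per recipe

      # second NIC on the host-only network; the address should come via DHCP
      config.vm.network "private_network", type: "dhcp"

      config.vm.provider "virtualbox" do |vb|
        vb.gui = false                           # headless server, no GUI
      end
    end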
Finally, the infrastructure is not directly accessible from the internet, so the dedicated server also acts as the OpenVPN host. The whole setup can be pictured like this:
+-------------------------------------+
| Dedicated Server                    |
|                                     |
|  +---------+      +---------+       |      +------------------+
|  | DNSMasq |      | OpenVPN |<============>|      Client      |
|  +---------+      +---------+       |      |                  |
|     ^   ^                           |      +------------------+
|     |   |                           |
|     |   |   +-------+               |
|     |   +---|  VM1  |               |
|     |       +-------+               |
|     |          ...                  |
|     |       +-------+               |
|     +-------|  VM2  |               |
|             +-------+               |
+-------------------------------------+
Now some questions I am struggling with:

1. Are there any other suggestions for accessing a private infrastructure like this? I don't want to reinvent the wheel.
2. On the dedicated server I don't see the vboxnet0 interface, but then VirtualBox is installed without a GUI. Accessing the VMs via SSH works fine. Did I miss something? (What I would try is shown in the VBoxManage sketch below.)
3. dnsmasq must serve the local VMs only; otherwise there is a chance that it starts answering other servers on the network, which I don't want. Because I don't see vboxnet0, I tend to use the no-dhcp-interface=eth0 config option. Are there any thoughts on that, apart from the fact that, if there were a second network card (which there isn't), dnsmasq might start serving DHCP requests on it? (A sketch of the dnsmasq config I have in mind is below.)
4. How should I configure the VMs' network interfaces so that I can reach them via OpenVPN and resolve their hostnames via dnsmasq? I think it should be the host-only network adapter, as in the Vagrantfile sketch above.
5. Should I do bridging in the OpenVPN config, or is routing sufficient? (See the routed server config sketch at the end.)
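Regarding question 2, this is what I would try on the host to check for and create the host-only interface (I assume VirtualBox names the first one vboxnet0; the address is just an example):

    # list existing host-only interfaces; on a headless install there may be none yet
    VBoxManage list hostonlyifs

    # create one and give it an address on the VM network
    VBoxManage hostonlyif create
    VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0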
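Regarding question 3, the dnsmasq config I have in mind is roughly this (interface name, address range and domain are my assumptions):

    # /etc/dnsmasq.conf (sketch)
    # answer DHCP/DNS only on the host-only interface, never on eth0
    interface=vboxnet0
    no-dhcp-interface=eth0
    bind-interfaces

    # hand out addresses to the VMs
    dhcp-range=192.168.56.100,192.168.56.200,12h

    # resolve the VM hostnames under a local domain
    domain=vm.local
    local=/vm.local/
    expand-hosts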
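Regarding question 5, if plain routing is enough, I would expect the relevant part of the OpenVPN server config to look something like this (tun device, subnets are the same assumptions as above; IP forwarding on the server would still have to be enabled separately):

    # server.conf (sketch) - routed (tun) setup, no bridging
    dev tun
    server 10.8.0.0 255.255.255.0

    # let clients reach the host-only network with the VMs
    push "route 192.168.56.0 255.255.255.0"

    # point clients at dnsmasq so the VM hostnames resolve over the VPN
    push "dhcp-option DNS 192.168.56.1"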