Diving into OpenStack Network Architecture - Part 1

Posted by Ronen Kofman on Oracle Blogs

Published on Wed, 28 May 2014
Before we begin

OpenStack networking has very powerful capabilities but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup, using the Oracle OpenStack Tech Preview, and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and to provide a bigger-picture view of the network architecture in OpenStack. This can be very helpful to users taking their first steps in OpenStack, or to anyone who wishes to understand how networking works in this environment. We will go through the basics first and build up the examples as we go.

According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), so in this blog series we will analyze this specific OpenStack networking setup. As we know, there are many options for setting up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup, there is no claim that it is the best or most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in, or are not using Neutron at all, this will still be a good starting point for understanding the network architecture in OpenStack.

The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple, and it is helpful to have it available as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic flows through this interface. The Oracle OpenStack Tech Preview uses VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment:

[Deployment diagram: each node's eth2 carries the VM network, with VLANs providing L2 isolation]


This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridges and veth pairs. Note that this is not meant to be a comprehensive review of these components; it describes each component only as much as is needed to understand the OpenStack network architecture. All the components described here can be further explored using other resources.

Open vSwitch (OVS)

In the Oracle OpenStack Tech Preview, OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports. OVS bridges are different from Linux bridges (controlled by the brctl command), which are also used in this setup. To get started, let's view the OVS structure using the following command:

# ovs-vsctl show
7ec51567-ab42-49e8-906d-b854309c9edf
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth2"
            Interface "int-br-eth2"
    Bridge "br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
    ovs_version: "1.11.0"

We see a standard post-deployment OVS on a compute node with two bridges and several ports hanging off each of them. The example above is from a compute node without any VMs; we can see that the physical port eth2 is connected to a bridge called "br-eth2". We also see two ports, "int-br-eth2" and "phy-br-eth2", which are actually a veth pair and form a virtual wire between the two bridges; veth pairs are discussed later in this post.
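
If you want to explore these objects directly, ovs-vsctl can also list the bridges and the ports attached to each of them. A minimal sketch, assuming the bridge names from the setup above:

# ovs-vsctl list-br
br-eth2
br-int
# ovs-vsctl list-ports br-int
int-br-eth2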

When a virtual machine is created, a port is created on the br-int bridge and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM has been launched:

# ovs-vsctl show
efd98c87-dc62-422d-8f73-a68c2a14e73d
    Bridge br-int
        Port "int-br-eth2"
            Interface "int-br-eth2"
        Port br-int
            Interface br-int
                type: internal
        Port "qvocb64ea96-9f"
            tag: 1
            Interface "qvocb64ea96-9f"
    Bridge "br-eth2"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "eth2"
            Interface "eth2"
    ovs_version: "1.11.0"

Bridge "br-int" now has a new port "qvocb64ea96-9f" which connects to the VM and tagged with VLAN 1. Every VM which will be launched will add a port on the “br-int” bridge for every network interface the VM has.

Another useful OVS command is dump-flows, for example:

# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL

As we see, the port which is connected to the VM uses VLAN tag 1, while the port on the VM network (eth2) uses tag 1000. OVS rewrites the VLAN tag as packets flow from the VM to the physical interface, and back again in the other direction. In OpenStack the Open vSwitch agent takes care of programming these flows, so users do not have to deal with this at all. If you wish to learn more about programming Open vSwitch, see the documentation describing the ovs-ofctl command at http://openvswitch.org.
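
For illustration only (the Open vSwitch agent programs these flows automatically, so you would never do this on an OpenStack node), a flow like the first one above could be added by hand with something like:

# ovs-ofctl add-flow br-int "priority=3,in_port=1,dl_vlan=1000,actions=mod_vlan_vid:1,NORMAL"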

Network Namespaces (netns)

Network namespaces are a very useful Linux feature that can be used for many purposes and are heavily used in OpenStack networking. A network namespace is an isolated container which holds a network configuration that is not visible from outside the namespace. A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation, as well as simply to help organize a complicated network setup. The Oracle OpenStack Tech Preview uses the latest Unbreakable Enterprise Kernel R3 (UEK3), which provides complete support for netns.

Let's see how namespaces work through a couple of examples. To control network namespaces we use the ip netns command.
Defining a new namespace:

# ip netns add my-ns
# ip netns list
my-ns

As mentioned, the namespace is an isolated container; we can perform all the normal actions in the namespace context using the exec command, for example running the ifconfig command:

# ip netns exec my-ns ifconfig -a
lo        Link encap:Local Loopback
          LOOPBACK  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

We can run any command in the namespace context. This is especially useful for debugging with the tcpdump command, and we can also ping, ssh, or define iptables rules, all within the namespace.
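
A few sketches of what that looks like in practice; each command sees only the namespace's own network stack:

# ip netns exec my-ns iptables -L
# ip netns exec my-ns tcpdump -i lo
# ip netns exec my-ns bash

The last form opens a shell inside the namespace, so every subsequent command runs in the namespace context.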

Connecting the namespace to the outside world:
There are various ways to connect into a namespace and between namespaces; we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces: OVS defines the interfaces, and then we can add those interfaces to a namespace.

So first let's add a bridge to OVS:

# ovs-vsctl add-br my-bridge

Now let's add a port on the OVS bridge and make it internal:

# ovs-vsctl add-port my-bridge my-port

# ovs-vsctl set Interface my-port type=internal

And let's move it into the namespace:

# ip link set my-port netns my-ns

Looking inside the namespace:

# ip netns exec my-ns ifconfig -a
lo        Link encap:Local Loopback
          LOOPBACK  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

my-port   Link encap:Ethernet  HWaddr 22:04:45:E2:85:21
          BROADCAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

Now we can add more ports to the OVS bridge and connect them to other namespaces or to other devices such as physical interfaces.
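
To make this concrete, here is a minimal sketch that connects a second namespace to the same bridge and verifies connectivity between the two. The names my-ns2 and my-port2 and the 10.10.10.0/24 addresses are hypothetical, chosen just for this example:

# ovs-vsctl add-port my-bridge my-port2
# ovs-vsctl set Interface my-port2 type=internal
# ip netns add my-ns2
# ip link set my-port2 netns my-ns2
# ip netns exec my-ns ifconfig my-port 10.10.10.1 up
# ip netns exec my-ns2 ifconfig my-port2 10.10.10.2 up
# ip netns exec my-ns ping -c 2 10.10.10.2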

Neutron uses network namespaces to implement network services such as DHCP, routing, gateway, firewall, load balancing and more. In the next post we will go into this in further detail.
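
As a preview, on a network node you can already see these namespaces with ip netns list; in a typical deployment they carry names such as qdhcp-<network-uuid> for DHCP servers and qrouter-<router-uuid> for routers (the UUID fragments below are made up for illustration):

# ip netns list
qdhcp-4f5d6c3a-...
qrouter-9b2c1e77-...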

Linux Bridge and veth pairs

A Linux bridge is used to connect the port from OVS to the VM: every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is security group enforcement. Security groups are implemented using iptables, and iptables rules can only be applied to Linux bridges, not to OVS bridges.
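
For illustration only (these are not the exact rules Neutron generates), iptables can match traffic crossing a Linux bridge port with the physdev module, which is the kind of per-port filtering that security groups rely on; the interface name tap0 here is a hypothetical example:

# iptables -A FORWARD -m physdev --physdev-in tap0 -j ACCEPT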

Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool for debugging network problems. A veth pair is simply a virtual wire, so veths always come in pairs: typically one side of the veth pair connects to a bridge and the other side connects to another bridge, or is simply left as a usable interface.

In this example we will create some veth pairs, connect them to bridges and test connectivity. This example uses a regular Linux server, not an OpenStack node.
Creating a veth pair; note that we define names for both ends:

# ip link add veth0 type veth peer name veth1

# ifconfig -a
...
veth0     Link encap:Ethernet  HWaddr 5E:2C:E6:03:D0:17
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

veth1     Link encap:Ethernet  HWaddr E6:B6:E2:6D:42:B8
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
...

To make the example more meaningful, we will create the following setup:

veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server

br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3

eth3 – a physical interface with no IP on it, connected to a private network

eth2 – a physical interface on the remote Linux box, connected to the private network and configured with the IP 50.50.50.51

Once we create the setup we will ping 50.50.50.51 (the remote IP) through veth0 to test that the connection is up:

# brctl addbr br-eth3
# brctl addif br-eth3 eth3
# brctl addif br-eth3 veth1
# brctl show
bridge name     bridge id               STP enabled     interfaces
br-eth3         8000.00505682e7f6       no              eth3
                                                        veth1
# ifconfig veth0 50.50.50.50
# ping -I veth0 50.50.50.51
PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data.
64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms
64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms

When the naming is not as obvious as in the previous example and we don't know which veth interfaces are paired, we can use the ethtool command to figure this out. ethtool returns an index that we can look up with the ip link command, for example:

# ethtool -S veth1
NIC statistics:
     peer_ifindex: 12
# ip link
...
12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
...
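
If a host has many veth interfaces, the same lookup can be scripted. A small sketch, assuming the interface names from this example:

# for v in veth0 veth1; do echo -n "$v peer_ifindex: "; ethtool -S $v | awk '/peer_ifindex/ {print $2}'; done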

Summary

That’s all for now. We quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring, and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out, connecting the virtual machines to each other and to the external world.

@RonenKofman
