How to make bridge networking with KVM work in Fedora 19

Posted by netllama on Server Fault, 2013-08-06

I'm attempting to set up several virtual machines on a Fedora 19 host, with the traditional bridge network devices (br0, br1, etc). I've done this many times before with older versions of Fedora (16, 14, etc), and it just works. However, for reasons I cannot figure out, the bridge doesn't seem to be working in Fedora 19. While I can successfully connect to the outside world (local network + internet) from inside a VM, nothing can communicate with the VM from outside (local network), even for something as trivial as a ping. From inside the VM, I can ping anything successfully (0% packet loss). However, from outside the VM (on the host, or any other system on the same network), I see 100% packet loss when pinging the IP address of the VM.
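
If it helps narrow things down, this is the kind of capture I can run on the host while another machine on the LAN pings the VM's address (plain tcpdump; vnet1 is only my guess for this VM's tap based on the brctl output below, so substitute the right one):

# tcpdump -ni em2 'icmp or arp'
# tcpdump -ni br0 'icmp or arp'
# tcpdump -ni vnet1 'icmp or arp'

If the echo requests show up on em2 but never on br0 or the tap, the bridge isn't forwarding them; if they make it all the way to the tap, the problem is inside the guest.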

My first question is simply: does anyone else have this working successfully in F19? And if so, what steps did you need to follow?

I'm not using NetworkManager at all; everything is handled by the legacy network service. There are no firewalls involved anywhere (the iptables and firewalld services are currently disabled). Here's the current host configuration:

# brctl show
bridge name bridge id       STP enabled interfaces
br0     8000.38eaa792efe5   no      em2
                            vnet1
br1     8000.38eaa792efe6   no      em3
br2     8000.38eaa792efe7   no      em4
                            vnet0
virbr0      8000.525400db3ebf   yes     virbr0-nic
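
Since I claim that neither NetworkManager nor any firewall is in play, here is a quick way to confirm it (unit names are the usual Fedora 19 ones; iptables.service only exists if the iptables-services package is installed):

# systemctl is-active NetworkManager.service network.service firewalld.service iptables.service
# iptables -L -n -v

The expectation is that only network.service reports active, and that the iptables chains are empty with an ACCEPT policy.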

# more /etc/sysconfig/network-scripts/ifcfg-em2
TYPE=Ethernet
BRIDGE="br0"
NAME=em2
DEVICE="em2"
UUID=aeaa839e-c89c-4d6e-9daa-79b6a1b919bd
ONBOOT=yes
HWADDR=38:EA:A7:92:EF:E5
NM_CONTROLLED="no"

# more /etc/sysconfig/network-scripts/ifcfg-br0
TYPE=Bridge
NM_CONTROLLED="no"
BOOTPROTO=dhcp
NAME=br0
DEVICE="br0"
ONBOOT=yes
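
One thing worth ruling out is the bridge forwarding delay: with STP on or a non-zero delay, a port that has just joined the bridge won't forward traffic for several seconds. The runtime state can be checked with brctl; the DELAY and STP lines below are standard initscripts directives that are not in my ifcfg-br0 above, and I'd only add them if the delay turns out to be non-zero:

# brctl showstp br0

# possible additions to /etc/sysconfig/network-scripts/ifcfg-br0
DELAY=0
STP=no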

# ifconfig em2 ;ifconfig br0
em2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::3aea:a7ff:fe92:efe5  prefixlen 64  scopeid 0x20<link>
        ether 38:ea:a7:92:ef:e5  txqueuelen 1000  (Ethernet)
        RX packets 100093  bytes 52354831 (49.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25321  bytes 15791341 (15.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xf7d00000-f7e00000  

br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.31.99.226  netmask 255.255.252.0  broadcast 10.31.99.255
        inet6 fe80::3aea:a7ff:fe92:efe5  prefixlen 64  scopeid 0x20<link>
        ether 38:ea:a7:92:ef:e5  txqueuelen 0  (Ethernet)
        RX packets 19619  bytes 1963328 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11  bytes 1074 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
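
Even with the firewall services disabled, netfilter can still see bridged frames via the bridge-nf sysctls (these keys only appear once the bridge module is loaded). With empty chains they shouldn't matter, but they're cheap to rule out, and setting them to 0 takes iptables out of the bridged path entirely:

# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-arptables
# sysctl -w net.bridge.bridge-nf-call-iptables=0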

Relevant section from /etc/libvirt/qemu/foo.xml (one of the VMs with this problem):

<interface type='bridge'>
  <mac address='52:54:00:26:22:9d'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
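
To confirm which tap device libvirt actually gave this guest and which bridge it landed on, I can list the domain's interfaces (the domain name is taken from the -name argument in the qemu command line below, which carries the same MAC as this XML):

# virsh domiflist cuda-linux64-build5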

# ps -ef | grep qemu
qemu      1491     1 82 13:25 ?        00:42:09 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name cuda-linux64-build5 -S -machine pc-0.13,accel=kvm,usb=off -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 16384 -smp 6,sockets=6,cores=1,threads=1 -uuid 6e930234-bdfd-044d-2787-22d4bbbe30b1 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/cuda-linux64-build5.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/cuda-linux64-build5.img,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:26:22:9d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
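
Also, while the guest generates some traffic, I can watch whether its MAC (52:54:00:26:22:9d, per the qemu line above) appears in br0's forwarding table:

# brctl showmacs br0

Given that outbound traffic from the VM works, I expect the MAC to be listed; the more telling check is whether an outside host ever gets an ARP reply for the VM's address (ip neigh show on that host).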

I can provide additional information if requested. Thanks!

