Search Results

Search found 130 results on 6 pages for 'libvirt'.

Page 4 of 6.

  • KVM/Qemu Guest Shutdown problems

    - by Gaia
    With both host and guest running CentOS 6.3 with KVM/Qemu virtualization, I see the following: "virsh shutdown kvm1" does not shut the guest down at all; virsh still lists it as running. "service libvirt-guests stop" does not shut it down within 280 seconds (shutdown_timeout=300, on_shutdown=shutdown). "shutdown now" from within the guest makes the guest unreachable; virsh still lists it as running, though it cannot connect to it. "shutdown -h now" from within the guest works. "shutdown -r now" from within the guest works. The libvirt logs show nothing for the first three scenarios. I can pause the guest fine. Bottom line: I cannot shut the guest down from outside it. What should I check to figure out what is going on?
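
    A hedged place to start (a diagnostic sketch, not a confirmed fix): "virsh shutdown" normally just sends an ACPI power-button event, so if nothing inside the guest handles that event the request is silently ignored. Verify that acpid is running in the guest, and request the ACPI method explicitly from the host while watching the domain state:

      # Inside the CentOS 6 guest: make sure something handles the power-button event
      yum install -y acpid && service acpid start && chkconfig acpid on

      # On the host (libvirt 0.9.10+ accepts --mode): request an ACPI shutdown and watch the state
      virsh shutdown kvm1 --mode acpi
      watch -n 5 'virsh domstate kvm1'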

    Read the article

  • redirecting arbitrary tcp/udp in kvm

    - by jbfink
    I've got a server with KVM on it, and multiple guest VMs. I'd like a way to redirect traffic from the host server to the VMs. Like, say, forward all traffic on port 2222 on the host to 22 on a guest VM for ssh. This would have to be done either through virt-manager or libvirt XML config files -- I've found multiple references to doing it through qemu (like http://forums.fedoraforum.org/showthread.php?t=237969) but absolutely nothing that I can see related to either libvirt or virt-manager. Do you know how I can do this?
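
    As far as I know there is no port-forward element in the libvirt XML for the default NAT network, so the usual workaround is a plain iptables DNAT rule on the host. A minimal sketch, assuming the guest sits on the default NAT network at 192.168.122.10 (a made-up address) and the host's public interface is eth0:

      # Forward host port 2222 to port 22 on the guest
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 \
        -j DNAT --to-destination 192.168.122.10:22
      # Let the forwarded traffic through the FORWARD chain
      iptables -I FORWARD -d 192.168.122.10 -p tcp --dport 22 -j ACCEPT

    To survive restarts, rules like these are usually dropped into an /etc/libvirt/hooks/qemu script rather than typed by hand.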

    Read the article

  • how to make bridge networking with KVM work in Fedora19

    - by netllama
    I'm attempting to get several virtual machines setup on a Fedora-19 host system, with the traditional bridge network devices (br0, br1, etc). I've done this many times before with older versions of Fedora (16, 14, etc), and it just works. However, for reasons that I cannot figure out, the bridge doesn't seem to be working in Fedora19. While I can successfully connect to the outside world (local network + internet) from inside a VM, nothing can communicate with the VM from outside (local network). I'm referring to something as trivial as pinging. From inside the VM, I can ping anything successfully (0% packet loss). However, from outside the VM (on the host, or any other system on the same network), I see 100% packet loss when pinging the IP address of the VM. My first question is simply, does anyone else have this working successfully in F19? And if so, what steps did you need to follow? I'm not using NetworkManager at all, its all the network service. There are no firewalls involved anywhere (iptables & firewall services are currently disabled). Here's the current host configuration: # brctl show bridge name bridge id STP enabled interfaces br0 8000.38eaa792efe5 no em2 vnet1 br1 8000.38eaa792efe6 no em3 br2 8000.38eaa792efe7 no em4 vnet0 virbr0 8000.525400db3ebf yes virbr0-nic # more /etc/sysconfig/network-scripts/ifcfg-em2 TYPE=Ethernet BRIDGE="br0" NAME=em2 DEVICE="em2" UUID=aeaa839e-c89c-4d6e-9daa-79b6a1b919bd ONBOOT=yes HWADDR=38:EA:A7:92:EF:E5 NM_CONTROLLED="no" # more /etc/sysconfig/network-scripts/ifcfg-br0 TYPE=Bridge NM_CONTROLLED="no" BOOTPROTO=dhcp NAME=br0 DEVICE="br0" ONBOOT=yes # ifconfig em2 ;ifconfig br0 em2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::3aea:a7ff:fe92:efe5 prefixlen 64 scopeid 0x20<link> ether 38:ea:a7:92:ef:e5 txqueuelen 1000 (Ethernet) RX packets 100093 bytes 52354831 (49.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 25321 bytes 15791341 (15.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device memory 0xf7d00000-f7e00000 br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.31.99.226 netmask 255.255.252.0 broadcast 10.31.99.255 inet6 fe80::3aea:a7ff:fe92:efe5 prefixlen 64 scopeid 0x20<link> ether 38:ea:a7:92:ef:e5 txqueuelen 0 (Ethernet) RX packets 19619 bytes 1963328 (1.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 11 bytes 1074 (1.0 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Relevant section from /etc/libvirt/qemu/foo.xml (one of the VMs with this problem): <interface type='bridge'> <mac address='52:54:00:26:22:9d'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> # ps -ef | grep qemu qemu 1491 1 82 13:25 ? 
00:42:09 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name cuda-linux64-build5 -S -machine pc-0.13,accel=kvm,usb=off -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 16384 -smp 6,sockets=6,cores=1,threads=1 -uuid 6e930234-bdfd-044d-2787-22d4bbbe30b1 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/cuda-linux64-build5.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/cuda-linux64-build5.img,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:26:22:9d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 I can provide additional information, if requested. thanks!
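
    A hedged place to start (diagnostics, not a known fix): check whether bridged frames are being pushed through netfilter at all, and watch where the ping replies die with tcpdump at each hop of the bridge:

      # If these report 1, bridged frames are run through iptables/ip6tables
      sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
      sysctl -w net.bridge.bridge-nf-call-iptables=0   # temporary, for testing

      # Do the VM's replies make it onto the tap device, the bridge, and the physical NIC?
      tcpdump -ni vnet1 icmp
      tcpdump -ni br0 icmp
      tcpdump -ni em2 icmp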

    Read the article

  • debootstrap or virt-install Ubuntu Server Maverick fails

    - by poelinca
    OK, so running any variation of debootstrap I get the following error: I: Extracting zlib1g... W: Failure trying to run: chroot /lxc/iso/dodo mount -t proc proc /proc debootstrap.log: mount: permission denied If I manually chroot into the directory I get prompted with: id: cannot find name for group ID 0 I have no name!@...# I tried addgroup but it's not installed, and apt-get/aptitude report "command not found", so I can't do anything with it. I've tried ubuntu-vm-builder, but since it calls debootstrap I get the same error. I played with it for a few days, then stopped and gave virt-install a try; everything works until I get to the console to finish the install, which shows only: Escape character is ^] and nothing more, no matter what I type. So basically what I'm trying to do is build a usable chroot system so I can use it with lxc or libvirt. What are my options to get containers/virtualisation up and running? I've read somewhere that I can use OpenVZ templates with lxc or libvirt? But how? Let me know if you need additional info (p.s. I'm doing all this on a dedicated server, so I can't access it by hand, only over ssh; on my local PC running Ubuntu Desktop Maverick everything works). EDIT: Getting closer. I managed to work out how to use an OpenVZ template with lxc; now the problem is the network bridge: lxc-start: invalid interface name: br0 # Use same bridge device used in your controlling host setup lxc-start: failed to process 'lxc.network.link = br0 # Use same bridge device used in your controlling host setup ' lxc-start: failed to read configuration file I followed the exact steps to create a bridge, and the lxc config looks like: lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 # Use same bridge device used in your controlling host setup lxc.network.hwaddr = {a1:b2:c3:d4:e5:f6} # As appropriate (line only needed if you wish to dhcp later) lxc.network.ipv4 = {10.0.0.100} # (Use 0.0.0.0 if you wish to dhcp later) lxc.network.name = eth0 # could likely be whatever you want Since it's not working I know something is wrong, so could somebody guide me? EDIT: It looks like the base install was using a custom kernel (bzImage-2.6.34.6-xxxx-grs-ipv6-65) for which I couldn't find the headers. I did an update-grub after installing a new kernel and edited menu.lst; now it's using 2.6.35-23-server, and debootstrap works just fine, same as ubuntu-vm-builder.
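
    For anyone hitting the same wall, a quick way to confirm you are on a provider-patched kernel and switch to a stock one (a sketch; the package name assumes a standard Ubuntu server install):

      # A "-grs-ipv6" style suffix usually means a provider-built grsecurity kernel
      uname -r

      # Install a stock server kernel, regenerate the boot menu, and reboot into it
      apt-get install linux-image-server
      update-grub
      reboot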

    Read the article

  • Why `initctl status tty1` gets `Unknown job: tty1`

    - by UniMouS
    # initctl status tty1 initctl: Unknown job: tty1 But there is a tty1.conf file in /etc/init and I haven't modified it: # cat /etc/init/tty1.conf # tty1 - getty # # This service maintains a getty on tty1 from the point the system is # started until it is shut down again. start on stopped rc RUNLEVEL=[2345] and ( not-container or container CONTAINER=lxc or container CONTAINER=lxc-libvirt) stop on runlevel [!2345] respawn exec /sbin/getty -8 38400 tty1
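
    A couple of hedged checks: ask Upstart to re-read /etc/init and see whether it lists the job at all; if this shell is inside a container or chroot, the "start on stopped rc" condition above may simply never have fired.

      # Re-parse everything under /etc/init
      initctl reload-configuration
      # Does the tty1 job show up now?
      initctl list | grep tty
      # Has the rc job (which tty1 starts on) actually run?
      initctl status rc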

    Read the article

  • Juju Openstack bundle: Can't launch an instance

    - by user281985
    Deployed bundle:~makyo/openstack/2/openstack, on top of 7 physical boxes and 3 virtual ones. After changing vip_iface strings to point to right devices, e.g., br0 instead of eth0, and defining "/mnt/loopback|30G", in Cinder's block-device string, am able to navigate through openstack dashboard, error free. Following http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/running-an-instance.html instructions, attempted to launch cirros 0.3.1 image; however, novalist shows the instance in error state. ubuntu@node7:~$ nova --debug boot --flavor 1 --image 28bed1bc-bc1c-4533-beee-8e0428ad40dd --key_name key2 --security_group default cirros REQ: curl -i http://keyStone.IP:5000/v2.0/tokens -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "openstack"}}}' INFO (connectionpool:191) Starting new HTTP connection (1): keyStone.IP DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 None RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:02 GMT', 'transfer-encoding': 'chunked', 'vary': 'X-Auth-Token', 'content-type': 'application/json'} RESP BODY: {"access": {"token": {"expires": "2014-06-11T00:01:02Z", "id": "3eefa1837d984426a633fe09259a1534", "tenant": {"description": "Created by Juju", "enabled": true, "id": "08cff06d13b74492b780d9ceed699239", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239", "publicURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:9696", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:9696", "publicURL": "http://nova.cloud.controller:9696"}], "endpoints_links": [], "type": "network", "name": "quantum"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:3333", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:3333", "publicURL": "http://nova.cloud.controller:3333"}], "endpoints_links": [], "type": "s3", "name": "s3"}, {"endpoints": [{"adminURL": "http://i.p.s.36:9292", "region": "RegionOne", "internalURL": "http://i.p.s.36:9292", "publicURL": "http://i.p.s.36:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239", "region": "RegionOne", "internalURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239", "publicURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:8773/services/Cloud", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:8773/services/Cloud", "publicURL": "http://nova.cloud.controller:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2", "name": "ec2"}, {"endpoints": [{"adminURL": "http://keyStone.IP:35357/v2.0", "region": "RegionOne", "internalURL": "http://keyStone.IP:5000/v2.0", "publicURL": "http://i.p.s.44:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id": "b3730a52a32e40f0a9500440d1ef1c7d", "roles": [{"id": "e020001eb9a049f4a16540238ab158aa", "name": 
"Admin"}, {"id": "b84fbff4d5554d53bbbffdaad66b56cb", "name": "KeystoneServiceAdmin"}, {"id": "129c8b49d42b4f0796109aaef2069aa9", "name": "KeystoneAdmin"}], "name": "admin"}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd HTTP/1.1" 200 719 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:03 GMT', 'x-compute-request-id': 'req-7f3459f8-d3d5-47f1-97a3-8407a4419a69', 'content-type': 'application/json', 'content-length': '719'} RESP BODY: {"image": {"status": "ACTIVE", "updated": "2014-06-09T22:17:54Z", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}, {"href": "http://External.Public.Port:9292/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "type": "application/vnd.openstack.image", "rel": "alternate"}], "id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "OS-EXT-IMG-SIZE:size": 13147648, "name": "Cirros 0.3.1", "created": "2014-06-09T22:17:54Z", "minDisk": 0, "progress": 100, "minRam": 0, "metadata": {}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 HTTP/1.1" 200 418 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:04 GMT', 'x-compute-request-id': 'req-2c153110-6969-4f3a-b51c-8f1a6ce75bee', 'content-type': 'application/json', 'content-length': '418'} RESP BODY: {"flavor": {"name": "m1.tiny", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 0, "id": "1"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers -X POST -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" -d '{"server": {"name": "cirros", "imageRef": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "key_name": "key2", "flavorRef": "1", "max_count": 1, "min_count": 1, "security_groups": [{"name": "default"}]}}' INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "POST /v1.1/08cff06d13b74492b780d9ceed699239/servers HTTP/1.1" 202 436 RESP: [202] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-41e53086-6454-4efb-bb35-a30dc2c780be', 'content-type': 'application/json', 'location': 
'http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43', 'content-length': '436'} RESP BODY: {"server": {"security_groups": [{"name": "default"}], "OS-DCF:diskConfig": "MANUAL", "id": "2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "bookmark"}], "adminPass": "oFRbvRqif2C8"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 HTTP/1.1" 200 1349 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-d91d0858-7030-469d-8e55-40e05e4d00fd', 'content-type': 'application/json', 'content-length': '1349'} RESP BODY: {"server": {"status": "BUILD", "updated": "2014-06-10T00:01:05Z", "hostId": "", "OS-EXT-SRV-ATTR:host": null, "addresses": {}, "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "bookmark"}], "key_name": "key2", "image": {"id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "links": [{"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}]}, "OS-EXT-STS:task_state": "scheduling", "OS-EXT-STS:vm_state": "building", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "flavor": {"id": "1", "links": [{"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}]}, "id": "2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "security_groups": [{"name": "default"}], "OS-EXT-AZ:availability_zone": "nova", "user_id": "b3730a52a32e40f0a9500440d1ef1c7d", "name": "cirros", "created": "2014-06-10T00:01:04Z", "tenant_id": "08cff06d13b74492b780d9ceed699239", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "config_drive": "", "metadata": {}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 HTTP/1.1" 200 418 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-896c0120-1102-4408-9e09-cd628f2dd699', 'content-type': 'application/json', 'content-length': '418'} RESP BODY: {"flavor": {"name": "m1.tiny", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "self"}, {"href": 
"http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 0, "id": "1"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd HTTP/1.1" 200 719 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-454e9651-c247-4d31-8049-6b254de050ae', 'content-type': 'application/json', 'content-length': '719'} RESP BODY: {"image": {"status": "ACTIVE", "updated": "2014-06-09T22:17:54Z", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}, {"href": "http://External.Public.Port:9292/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "type": "application/vnd.openstack.image", "rel": "alternate"}], "id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "OS-EXT-IMG-SIZE:size": 13147648, "name": "Cirros 0.3.1", "created": "2014-06-09T22:17:54Z", "minDisk": 0, "progress": 100, "minRam": 0, "metadata": {}}} +-------------------------------------+--------------------------------------+ | Property | Value | +-------------------------------------+--------------------------------------+ | OS-EXT-STS:task_state | scheduling | | image | Cirros 0.3.1 | | OS-EXT-STS:vm_state | building | | OS-EXT-SRV-ATTR:instance_name | instance-00000004 | | flavor | m1.tiny | | id | 2eb5e3ad-3044-41c1-bbb7-10f398f83e43 | | security_groups | [{u'name': u'default'}] | | user_id | b3730a52a32e40f0a9500440d1ef1c7d | | OS-DCF:diskConfig | MANUAL | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 0 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | | status | BUILD | | updated | 2014-06-10T00:01:05Z | | hostId | | | OS-EXT-SRV-ATTR:host | None | | key_name | key2 | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | name | cirros | | adminPass | oFRbvRqif2C8 | | tenant_id | 08cff06d13b74492b780d9ceed699239 | | created | 2014-06-10T00:01:04Z | | metadata | {} | +-------------------------------------+--------------------------------------+ ubuntu@node7:~$ ubuntu@node7:~$ nova list +--------------------------------------+--------+--------+----------+ | ID | Name | Status | Networks | +--------------------------------------+--------+--------+----------+ | 2eb5e3ad-3044-41c1-bbb7-10f398f83e43 | cirros | ERROR | | +--------------------------------------+--------+--------+----------+ ubuntu@node7:~$ var/log/nova/nova-compute.log shows the following error: ... 
2014-06-10 00:01:06.048 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Attempting claim: memory 512 MB, disk 0 GB, VCPUs 1 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total Memory: 3885 MB, used: 512 MB 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Memory limit: 5827 MB, free: 5315 MB 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total Disk: 146 GB, used: 0 GB 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Disk limit not specified, defaulting to unlimited 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total CPU: 2 VCPUs, used: 0 VCPUs 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] CPU limit not specified, defaulting to unlimited 2014-06-10 00:01:06.051 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Claim successful 2014-06-10 00:01:06.963 WARNING nova.network.quantumv2.api [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] No network configured! 
2014-06-10 00:01:08.347 ERROR nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Instance failed to spawn 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Traceback (most recent call last): 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1118, in _spawn 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] self._legacy_nw_info(network_info), 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 703, in _legacy_nw_info 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] network_info = network_info.legacy() 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] AttributeError: 'list' object has no attribute 'legacy' 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] 2014-06-10 00:01:08.919 AUDIT nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Terminating instance 2014-06-10 00:01:09.712 32223 ERROR nova.virt.libvirt.driver [-] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] During wait destroy, instance disappeared. 2014-06-10 00:01:09.718 INFO nova.virt.libvirt.firewall [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Attempted to unfilter instance which is not filtered 2014-06-10 00:01:09.719 INFO nova.virt.libvirt.driver [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Deleting instance files /var/lib/nova/instances/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 2014-06-10 00:01:10.044 ERROR nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 864, in _run_instance\n set_access_ip=set_access_ip)\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1123, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', ' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1118, in _spawn\n self._legacy_nw_info(network_info),\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 703, in _legacy_nw_info\n network_info = network_info.legacy()\n', "AttributeError: 'list' object has no attribute 'legacy'\n"] 2014-06-10 00:01:40.951 32223 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2014-06-10 00:01:41.072 32223 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 2861 2014-06-10 00:01:41.072 32223 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 146 2014-06-10 00:01:41.073 
32223 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1 2014-06-10 00:01:41.262 32223 INFO nova.compute.resource_tracker [-] Compute_service record updated for node5:node5.maas ... Can't seem to find any entries in quantum.conf related to "legacy". Any help would be appreciated. Cheers,
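
    The "No network configured!" warning just before the traceback suggests the tenant has no Quantum network yet, so a hedged first step (the network name and CIDR are made up for illustration) is to create one and attach the instance to it explicitly:

      # Create a private network and subnet for the admin tenant
      quantum net-create private
      quantum subnet-create private 10.5.5.0/24 --name private-subnet

      # Boot against that network explicitly, using the id printed by net-create
      nova boot --flavor 1 --image 28bed1bc-bc1c-4533-beee-8e0428ad40dd \
        --key_name key2 --security_group default \
        --nic net-id=<id-from-net-create> cirros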

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF), and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights for both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps rapid progress with a lot of changes, especially focused on the infrastructure/performance improvements as well as  new feature development.  This can be reflected with a sample statistics among XFS/Ext4+JBD2/Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata(CRC32c). This is a new feature and it contributed about 70% code changes, it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than runtime to reduce the the CPU overhead. User namespace support. So both XFS and USERNS can be enabled on kernel configuration begin from Linux 3.10. Thanks Dwight Engen's efforts for this thing. Split project/group quota inodes. Originally, project quota can not be enabled with group quota at the same time because they were share the same quota file inode, now it works but only for v5 super block. i.e, CRC enabled. CONFIG_XFS_WARN, an new lightweight runtime debugger which can be deployed in production environment. Readahead log object recovery, this change can speed up the log replay progress significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, support improved quota management to free up a significant amount of unwritten space when at or near EDQUOT. It support backgroup scanning which occurs on a longish interval(5 mins by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued from this session, especially for the comparison between Ext4 and Btrfs in different areas, I have to spent a whole morning of the 1st day answering those questions. We basically agreed on XFS is the best choice in Linux nowadays because: Stable, XFS has a good record in stability in the past 10 years. Fengguang Wu who lead the 0-day kernel test project also said that he has observed less error than other filesystems in the past 1+ years, I own it to the XFS upstream code reviewer, they always performing serious code review as well as testing. Good performance for large/small files, XFS does not works very well for small files has already been an old story for years. Best choice (maybe) for distributed PB filesystems. e.g, Ceph recommends delopy OSD daemon on XFS because Ext4 has limited xattr size. Best choice for large storage (>16TB). Ext4 does not support a single file more than around 15.95TB. Scalability, any objection to XFS is best in this point? :) XFS is better to deal with transaction concurrency than Ext4, why? The maximum size of the log in XFS is 2038MB compare to 128MB in Ext4. Misc. Ext4 is widely used and it has been proved fast/stable in various loads and scenarios, XFS just need more customers, and Btrfs is still on the road to be a manhood. 
Ceph Introduction (Led by Li Wang) This a hot topic.  Li gave us a nice introduction about the design as well as their current works. Actually, Ceph client has been included in Linux kernel since 2.6.34 and supported by Openstack since Folsom but it seems that it has not yet been widely deployment in production environment. Their major work is focus on the inline data support to separate the metadata and data storage, reduce the file access time, i.e, a file access need communication twice, fetch the metadata from MDS and then get data from OSD, and also, the small file access is limited by the network latency. The solution is, for the small files they would like to store the data at metadata so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication to OSD. For this feature, they have only run some small scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequence read performance for 1K size files improved about 50%. I have asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed at production environment for large scale PB level storage, but they can not give a positive answer, looks Ceph even does not spread over Dreamhost (subject to confirmation). From Li, they only deployed Ceph for a small scale storage(32 nodes) although they'd like to try 6000 nodes in the future. Improve Linux swap for Flash storage (led by Shaohua Li) Because of high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to efficiently use SSD for swap. SWAPOUT swap_map scan swap_map is the in-memory data structure to track swap disk usage, but it is a slow linear scan. It will become a bottleneck while finding many adjacent pages in the use of SSD. Shaohua Li have changed it to a cluster(128K) list, resulting in O(1) algorithm. However, this apporoach needs restrictive cluster alignment and only enabled for SSD. IO pattern In most cases, the swap io is in interleaved pattern because of mutiple reclaimers or a free cluster is shared by all reclaimers. Even though block layer can merge interleaved IO to some extent, but we cannot count on it completely. Hence the per-cpu cluster is added base on the previous change, it can help reclaimer do sequential IO and the block layer will be easier to merge IO. TLB flush: If we're reclaiming one active page, we should first move the page from active lru list to inactive lru list, and then reclaim the page from inactive lru to swap it out. During the process, we need to clear PTE twice: first is 'A'(ACCESS) bit, second is 'P'(PRESENT) bit. Processors need to send lots of ipi which make the TLB flush really expensive. Some works have been done to improve this, including rework smp_call_functiom_many() or remove the first TLB flush in x86, but there still have some arguments here and only parts of works have been pushed to mainline. SWAPIN: Page fault does iodepth=1 sync io, but it's a little waste if only issue a page size's IO. The obvious solution is doing swap readahead. 
But the current in-kernel swap readahead is arbitary(always 8 pages), and it always doesn't perform well for both random and sequential access workload. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the changes happen in userspace API and leave the in-kernel readahead unchanged(but I think some improvement can also be done here). SWAP discard As we know, discard is important for SSD write throughout, but the current swap discard implementation is synchronous. He changed it to async discard which allow discard and write run in the same time. Meanwhile, the unit of discard is also optimized to cluster. Misc: lock contention For many concurrent swapout and swapin , the lock contention such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap lock. But there still have some lock contention in very high speed SSD because of swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction about the current memory compression status. Now there are 3 projects(zswap/zram/zcache) which all aim at smooth swap IO storm and promote performance, but they all have their own pros and cons. ZSWAP It is implemented based on frontswap API and it uses a dynamic allocater named Zbud to allocate free pages. Zbud means pairs of zpages are "buddied" and it can only store at most two compressed pages in one page frame, so the max compress ratio is 50%. Each page frame is lru-linked and can do shink in memory pressure. If the compressed memory pool reach its limitation, shink or reclaim happens. It decompress the page frame into two new allocated pages and then write them to real swap device, but it can fail when allocating the two pages. ZRAM Acts as a compressed ramdisk and used as swap device, and it use zsmalloc as its allocator which has high density but may have fragmentation issues. Besides, page reclaim is hard since it will need more pages to uncompress and free just one page. ZRAM is preferred by embedded system which may not have any real swap device. Now both ZRAM and ZSWAP are in driver/staging tree, and in the mm community there are some disscussions of merging ZRAM into ZSWAP or viceversa, but no agreement yet. ZCACHE Handles file page compression but it is removed out of staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new produces to us. The first is raid5/6 cards that it use full stripe writes to improve performance. The 2nd one he introduced is SandForce flash controller, who can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite and typical WA is 0.5. What's more, if enable its Dynamic Logical Capacity function module, the controller can do data compression which is transparent to upper layer. LSI testing shows that with this virtual capacity enables 1x TB drive can support up to 2x TB capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for datebase. Another thing I think it's worth to mention is that a NV-DRAM memory in NMR/Raptor which is directly exposed to host system. Applications can directly access the NV-DRAM via a memory address - using standard system call mmap(). He said that it is very useful for database logging now. 
This kind of NVM produces are beginning to appear in recent years, and it is said that Samsung is building a research center in China for related produces. IMHO, NVM will bring an effect to current os layer especially on file system, e.g. its journaling may need to redesign to fully utilize these nonvolatile memory. OCFS2 (led by Canquan Shen) Without a doubt, HuaWei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches and 39 patches have been merged. Their current project is based on 32/64 nodes cluster, but they also tried 128 nodes at the experimental stage. The major work they are working is to support ATS (atomic test and set), it can be works with DLM at the same time. Looks this idea is inspired by the vmware VMFS locking, i.e, http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013 Improving Linux Development with Better Tools (Andi Kleen) This talk focused on how to find/solve bugs along with the Linux complexity growing. Generally, we can do this with the following kind of tools: Static code checkers tools. e.g, sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. This can help check a lot of things, simple mistakes, complex problems, but the challenges are: some are very slow, false positives, may need a concentrated effort to get false positives down. Especially, no static checker I found can follow indirect calls (“OO in C”, common in kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo); Dynamic runtime checkers, e.g, thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite, then someone could run all the dynamic checkers. Fuzzers/test suites. e.g, Trinity is a great tool, it finds many bugs, but needs manual model for each syscall. Modern fuzzers around using automatic feedback, but notfor kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/Tracers to understand code, e.g, ftrace, can dump on events/oops/custom triggers, but still too much overhead in many cases to run always during debug. Tools to read/understand source, e.g, grep/cscope work great for many cases, but do not understand indirect pointers (OO in C model used in kernel), give us all “do_foo” instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo>do_foo(foo); That would be great to have a cscope like tool that understands this based on types/initializers XFS: The High Performance Enterprise File System (Jeff Liu) [slides] I gave a talk for introducing the disk layout, unique features, as well as the recent changes.   The slides include some charts to reflect the performances between XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asking who has experienced with XFS. I remembered that when I asked the same question in LinuxCon/Japan, only 3 people raised their hands, but they are Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability, and comparison with other file systems. Linux Containers (Feng Gao) The speaker introduced us that the purpose for those kind of namespaces, include mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, He mainly focus on the Libvirt LXC rather than us(LXC). 
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible new namespaces for the future: the first is audit, though it is not yet clear whether it should be tied to the user namespace or not; the other is syslog, but the question there is whether we really need it. In-memory Compression (Bob Liu) Same as at CLSF, a nice introduction that I have already covered above. Misc There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all of them. -- Jeff Liu

    Read the article

  • Windows 7 fails to install on KVM with qemu

    - by kief_morris
    I'm trying to install Windows 7 on a virtual machine on my 64 bit Ubuntu Karmic box. I get to the point of selecting my language settings and clicking 'install now', but a short while later I get a blue screen of death. I've tried a few variations, including using the 32 bit (fails very quickly). The virt-install command I've tried includes this: sudo virt-install --connect qemu:///system -n ksm-win7 -r 2048 \ --disk path=/home/kief/VM-Images/ksm-win7.qcow2,size=50 \ -c /var/Software/Windows7/Full/64bit/SW_DVD5_SA_Win_Ent_7_64BIT_English_Full_MLF_X15-70749.ISO \ --vnc --os-type windows --os-variant vista --hvm The limited info I could find suggested that 'vista' should work as the --os-variant, I haven't found any values specific to windows 7. Here's my blue screen: I've found very little by Googling, so I'm guessing this isn't a case of KVM simply not supporting Windows 7. Thanks for any help. Update: I have been able to successfully create a Windows 7 VM using the graphical "Virtual Machine Manager" app, although I don't really understand the cause of the problem with the VM created with virt-install. Comparing the configuration files under /etc/libvirt/qemu provides some clues, although I don't know enough to interpret them properly. The interesting differences in the two VM configurations are: --- win7-virt-install.xml +++ win7-vmm.xml -<domain type='qemu'> +<domain type='kvm'> @@ -21 +21 @@ - <emulator>/usr/bin/qemu-system-x86_64</emulator> + <emulator>/usr/bin/kvm</emulator> @@ -23 +23 @@ - <source file='/home/kief/VM-Images/ksm-win7.qcow2'/> + <source file='/var/lib/libvirt/images/ksm-win7x64.img'/> I'm not sure if this means the working VM is not using qemu at all, or if there is some other difference in the way it's used with kvm. Update2: So I've answered my own question (mostly) below. A KVM VM needs to use KVM's own CPU emulation rather than qemu's in order for me to get Windows 7 installed. I'm not sure whether there is something that can be done to get it working on a qemu-emulation CPU, or whether a newer version will support it. But at least it is possible to get it running on a KVM VM.
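
    For completeness, a hedged sketch of asking virt-install for a hardware-accelerated (KVM) guest up front and checking what it actually produced; --accelerate is the flag the virt-install of that era used (newer versions make it the default), and the ISO path is a placeholder:

      # Can the host do hardware virtualisation at all?
      egrep -c '(vmx|svm)' /proc/cpuinfo
      lsmod | grep kvm

      # Same install, but explicitly accelerated
      sudo virt-install --connect qemu:///system -n ksm-win7 -r 2048 --accelerate \
        --disk path=/home/kief/VM-Images/ksm-win7.qcow2,size=50 \
        -c /path/to/windows7.iso --vnc --os-type windows --os-variant vista --hvm

      # The resulting domain should say type='kvm', not type='qemu'
      virsh dumpxml ksm-win7 | grep "<domain"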

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out a performance problems with a Mumble server, which I described in a previous question are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this and how to further debug it, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices. One for the unencrypted boot partition and one as container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES encrypted LUKS container. Inside the LUKS container there is a LVM physical volume. The hypervisor's VFS is split on three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The IO scheduler (elevator) on both the hypervisor and the guest system is set to deadline instead of the default cfs as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure that not the basic structure causes the latency problems, as my previous server ran four years with almost the same basic setup, without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both hypervisor and almost all guests Xen software virtualisation (therefore no Virtio, also) no LibVirt management Different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore) We had no IPv6 connectivity Neither in the hypervisor nor in guests we had noticable I/O latency problems According the the datasheets, the current hard drives and the one of the old machine have an average latency of 4.12ms.
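
    A hedged way to narrow down which layer is at fault is to measure latency separately at each level of that stack while the problem is visible (ioping and iostat are assumptions about available tooling; both are packaged in Wheezy):

      apt-get install ioping sysstat

      # Raw disks, the RAID device, and a decrypted LVM volume (the mapper name is illustrative)
      ioping -c 10 /dev/sda
      ioping -c 10 /dev/md1
      ioping -c 10 /dev/mapper/vg0-hypervisor--root

      # Per-device await and utilisation over time
      iostat -x 5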

    Read the article

  • With CentOS 6 and LXC, "ifconfig" is unable to see network interface (but busybox "ifconfig" works fine)

    - by larsks
    I've just started working with LXC under CentOS 6 (via the libvirt adapter). If I create an LXC container, I'm unable to see any network interfaces when using the native system tools: # ifconfig -a # The behavior is very odd; specifying an interface by names yields neither the expected output nor an error message. This is true even for clearly invalid interface names, like this: # ifconfig foo # The ip command exhibits the same behavior. On the other hand, if I use "ifconfig" provided by busybox, everything works as expected: # busybox ifconfig -a eth0 Link encap:Ethernet HWaddr 52:54:00:E0:12:C8 inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:268 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:17814 (17.3 KiB) TX bytes:552 (552.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) So...what does busybox know that the native tools don't? The libvirt config for this environment is pretty standard; the network definition looks like this: <interface type='network'> <mac address='52:54:00:e0:12:c8'/> <source network='default'/> <target dev='veth0'/> </interface> The full configuration is here if you think it might help. I'm running: lxc-0.7.2-2.el6.x86_64 kernel-2.6.32-71.29.1.el6.x86_64 EDIT Weirder and weirder...it's a display issue, not a functionality issue. I can see the output of ifconfig if I pipe it into anything, so for example: # ifconfig eth0 | cat eth0 Link encap:Ethernet HWaddr 52:54:00:E0:12:C8 inet addr:192.168.10.10 Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:573 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:37914 (37.0 KiB) TX bytes:552 (552.0 b) And in fact even when not piping the output, strace shows that ifconfig is in fact writing the output to file descriptor 1 (aka stdout), so it's not clear why no output is actually showing up. This could be either an LXC or a virsh issue, I guess.
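
    A hedged way to chase the display-only symptom: look at what stdout actually is inside the container, and compare the write() calls in the working (piped) and failing (tty) cases.

      # What is fd 1 attached to inside the container?
      ls -l /proc/self/fd/1
      tty

      # Compare the final writes in both cases
      strace -f -e trace=write -o /tmp/tty.log ifconfig eth0
      strace -f -e trace=write -o /tmp/pipe.log sh -c 'ifconfig eth0 | cat'
      diff /tmp/tty.log /tmp/pipe.log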

    Read the article

  • installing kvm on centos 6.2 getting error: virsh sysinfo error: failed to get sysinfo

    - by Grant
    # virsh sysinfo error: failed to get sysinfo error: unsupported configuration: Host SMBIOS information is not available # virsh -c qemu:///system sysinfo error: failed to get sysinfo error: unsupported configuration: Host SMBIOS information is not available I am following this tutorial to the letter: http://eduardo-lago.blogspot.com/2012/01/how-to-install-kvm-and-libvirt-on.html Everything else works fine; only "virsh sysinfo" returns an error. Help!
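
    A hedged check: "virsh sysinfo" only re-exports the host's SMBIOS/DMI tables, so if the host itself cannot read them (no dmidecode, or a VM or board without DMI data), that error is expected rather than a libvirt bug.

      # Is SMBIOS/DMI data readable on the host at all?
      yum install -y dmidecode
      dmidecode -t system
      ls /sys/class/dmi/id/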

    Read the article

  • kvm and qemu host: Is there a limit for max CPUs (Ubuntu 10.04)?

    - by Valentin
    Today we encountered a really strange behaviour on two identical kvm and qemu hosts. The host systems each have 4 x 10 Cores, which means that 40 physical cores are displayed as 80 within the operating system (Ubuntu Linux 10.04 64 Bit). We started a Windows 2003 32 Bit VM (1 CPU, 1 GB RAM, we changed those values multiple times) on one of the nodes and noticed that it took 15 minutes until the boot process began. During those 15 minutes, a black screen is shown and nothing happens. libvirt and the host system show that the qemu-kvm process for the guest is almost idling. stracing this process only shows some FUTEX entries, but nothing special. After those 15 minutes, the Windows VM suddenly starts booting and the Windows logo occurs. After a few seconds, the VM is ready to be used. The VM itself is very performant, so this is no performance issue. We tried to pin the CPUs with the virsh and taskset tools, but this only made things worse. When we boot the Windows VM with a Linux Live CD there is also a black screen for several minutes, but not as long as 15. When booting another VM on this host (Ubuntu 10.04) it also has the black screen problem, and also here the black screen is only shown for 2-3 minutes (instead of 15). So, summerinzing this: Each guest on each of those identical nodes suffers from idling a few minutes after being started. After a few minutes, the boot process suddenly starts. We have observed that the idling time happens right after the bios of the guest was initialized. One of our employees had the idea to limit the amount of CPUs with maxcpus=40 (because of 40 physical cores existing) within Grub (kernel parameter) and suddenly the "black-screen-idling"-behaviour disappeared. Searching the KVM and Qemu mailing lists, the internet, forums, serverfault and other various sites for known bugs etc. showed no useful results. Even asking in the dev IRC channels brought no new ideas. The people there recommend us to use CPU pinning, but as stated before it didn't help. My question is now: Is there a sort of limit of CPUs for a qemu or kvm host system? Browsing the source code of those two tools showed that KVM would send a warning if your host has more than 255 CPUs. But we are not even scratching on that limit. Some stuff about the host system: 3.0.0-20-server kvm 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4 kvm-pxe 5.4.4-7ubuntu2 qemu-kvm 0.14.0+noroms-0ubuntu4 qemu-common 0.14.0+noroms-0ubuntu4 libvirt 0.8.8-1ubuntu6 4 x Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz, 10 Cores
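
    For reference, a sketch of making that boot-parameter workaround persistent on Ubuntu 10.04 and later (the value 40 is simply what worked here, not a general recommendation):

      # /etc/default/grub
      GRUB_CMDLINE_LINUX="maxcpus=40"

      # regenerate the menu, reboot, then confirm
      sudo update-grub
      sudo reboot
      grep -o 'maxcpus=[0-9]*' /proc/cmdline
      grep -c ^processor /proc/cpuinfo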

    Read the article

  • /etc/init.d/libvirtd start fails but service libvirtd start works. Why?

    - by Gregg
    CentOS 6.3, running as root (shush). Can you please tell me why I get initialisation failures from the init script while the service command works a treat? There was nothing in /var/log/messages or /var/log/libvirt/*; all I have is the terminal output: /etc/init.d/libvirtd start Starting libvirtd daemon: libvirtd: initialization failed [FAILED] I changed the libvirtd logging level to 1, the most verbose, but still saw nothing in messages after another failure.
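
    The usual difference between the two is the environment: "service" runs the init script with a stripped-down environment, while calling /etc/init.d/libvirtd directly inherits everything from your shell. A hedged way to confirm that, and to make libvirtd say why it failed:

      # Mimic what "service" does: run the script with an almost empty environment
      env -i PATH=/sbin:/usr/sbin:/bin:/usr/bin TERM="$TERM" /etc/init.d/libvirtd start

      # Or run the daemon in the foreground with verbose output to catch the real error
      libvirtd -v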

    Read the article

  • error in qemu monitor wavcapture with virsh

    - by Aniket Awati
    I have a VM running on qemu-kvm, managed with libvirt and the command-line tool virsh. I want to record the audio output of the VM. Here is what I am trying: virsh qemu-monitor-command -hmp VM_NAME wavcapture VM.wav This is the output I am getting: Failed to open wave file `vm.wav' Reason: Permission denied Failed to add wave capture I have tried creating a dummy vm.wav with 777 permissions, but I still get the same error.
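
    The wave file is opened by the qemu process itself (which typically runs as the unprivileged "qemu" user, with a working directory it cannot write to), not by virsh, so a hedged workaround is to hand it an absolute path that user can write to, and to stop the capture afterwards:

      mkdir -p /var/lib/libvirt/qemu/capture
      chown qemu:qemu /var/lib/libvirt/qemu/capture

      virsh qemu-monitor-command --hmp VM_NAME 'wavcapture /var/lib/libvirt/qemu/capture/VM.wav'
      virsh qemu-monitor-command --hmp VM_NAME 'info capture'
      virsh qemu-monitor-command --hmp VM_NAME 'stopcapture 0'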

    Read the article

  • How to decrease size of KVM virtual machine disk image

    - by Cerin
    How do you decrease or shrink the size of a KVM virtual machine disk? I allocated a virtual disk of 500GB (stored at /var/lib/libvirt/images/vm1.img), and I'm finding that overkill, so now I'd like to free up some of that space for use with other virtual machines. There seem to be a lot of answers on how to increase image storage, but not on how to decrease it. I found the virt-resize tool, but it only seems to work with raw disk partitions, not disk images.
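
    If the real goal is to get unused space back on the host (rather than to change the virtual size the guest sees), a hedged sketch with the VM shut down: zero the free space from inside the guest, then rewrite the image so the zeroed blocks become sparse again.

      # Inside the guest: fill free space with zeros, then remove the filler
      dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile; sync

      # On the host, with the VM shut down: qemu-img convert skips zero blocks by default
      qemu-img convert -O raw /var/lib/libvirt/images/vm1.img /var/lib/libvirt/images/vm1-compact.img
      mv /var/lib/libvirt/images/vm1-compact.img /var/lib/libvirt/images/vm1.img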

    Read the article

  • ubuntu 13.10 kvm binary is deprecated, please use qemu-system-x86_64

    - by ??1986
    I just upgrade from 13.04 to 13.10 and I have this issue when I run my KVM Unable to complete install: 'internal error: process exited while connecting to monitor: W: kvm binary is deprecated, please use qemu-system-x86_64 instead char device redirected to /dev/pts/10 (label charserial0) failed to initialize KVM: Device or resource busy Detail Error: Traceback (most recent call last): File "/usr/share/virt-manager/virtManager/asyncjob.py", line 96, in cb_wrapper callback(asyncjob, *args, **kwargs) File "/usr/share/virt-manager/virtManager/create.py", line 1983, in do_install guest.start_install(False, meter=meter) File "/usr/lib/python2.7/dist-packages/virtinst/Guest.py", line 1246, in start_install noboot) File "/usr/lib/python2.7/dist-packages/virtinst/Guest.py", line 1314, in _create_guest dom = self.conn.createLinux(start_xml or final_xml, 0) File "/usr/lib/python2.7/dist-packages/libvirt.py", line 2892, in createLinux if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self) libvirtError: internal error: process exited while connecting to monitor: W: kvm binary is deprecated, please use qemu-system-x86_64 instead char device redirected to /dev/pts/8 (label charserial0) failed to initialize KVM: Device or resource busy
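
    The "kvm binary is deprecated" line is only a warning; the fatal part is "failed to initialize KVM: Device or resource busy", which usually means something else (another hypervisor such as VirtualBox, or a stale qemu process) is holding /dev/kvm. A hedged way to check and recover:

      # Who has /dev/kvm open, and which virtualisation modules are loaded?
      sudo fuser -v /dev/kvm
      lsmod | grep -E 'kvm|vbox'

      # If VirtualBox is the culprit (assumption: it is installed), stop it and reload kvm
      sudo service virtualbox stop
      sudo modprobe -r kvm_intel kvm && sudo modprobe kvm_intel   # kvm_amd on AMD hosts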

    Read the article

  • vmbuilder fails on chroot

    - by Bruce
    I am trying to build a virtual machine with this command, without success:
        vmbuilder kvm ubuntu --verbose --suite precise --flavour virtual \
          --part partitions.txt --ip 192.168.1.3 --hostname edb1 --arch amd64 \
          -o --libvirt qemu:///system --user someuser --pass somepass \
          --raw /home/virtual-machines/edb1.disk1.img \
          --raw /home/virtual-machines/edb1.disk2.img \
          --domain somedomain.com --mem 4096 --cpus 4
    This is the error:
        ...
        I: Extracting xz-utils...
        I: Extracting zlib1g...
        W: Failure trying to run: chroot /tmp/tmp_JdKzu mount -t proc proc /proc , stderr:
    The host kernel is not the original one but has been modified by the server provider. Why is the chroot needed for the installation?
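    For context: vmbuilder bootstraps the guest filesystem into a temporary directory and then chroots into it to configure packages, which is why it needs to mount proc inside that chroot. Whether the provider's modified kernel allows such a mount at all can be checked by hand; a minimal sketch using an arbitrary test path:

        sudo mkdir -p /tmp/proc-test
        sudo mount -t proc proc /tmp/proc-test && echo "proc mounts are allowed" && sudo umount /tmp/proc-test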

    Read the article

  • Creating a user called 'root'

    - by pnp
    I am creating virtual machines using ubuntu-vm-builder. The syntax goes something like this:
        ubuntu-vm-builder kvm precise \
          --domain newvm \
          --dest newvm \
          --arch i386 \
          --hostname hostnameformyvm \
          --mem 256 \
          --user john \
          --pass doe \
          --ip 192.168.0.12 \
          --mask 255.255.255.0 \
          --net 192.168.0.0 \
          --bcast 192.168.0.255 \
          --gw 192.168.0.1 \
          --dns 192.168.0.1 \
          --mirror http://archive.localubuntumirror.net/ubuntu \
          --components main,universe \
          --addpkg acpid \
          --addpkg vim \
          --addpkg openssh-server \
          --addpkg avahi-daemon \
          --libvirt qemu:///system ;
    I need to enable the root account after creating each of my VMs (and supply a password for it). I was wondering whether I can take the shortcut of supplying the username (--user) as root in this command itself. If I supply root as the username when creating my VMs, am I creating/enabling the real root account, or just creating a user that happens to be named root?
    P.S.: any better ways to achieve this are also welcome, but I don't want to manually meddle with each VM after its creation.
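    It is not obvious that --user root would do what is intended, since a root account already exists in the bootstrapped image. One alternative, sketched here as an assumption rather than a tested recipe, is vmbuilder's --firstboot hook, which runs a script once inside the new VM on its first boot and could set root's password there:

        cat > enable-root.sh <<'EOF'
        #!/bin/sh
        echo 'root:ChangeMeOnFirstLogin' | chpasswd
        EOF
        chmod +x enable-root.sh
        # then append to the ubuntu-vm-builder command line:
        #   --firstboot /absolute/path/to/enable-root.sh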

    Read the article

  • ubuntu 10.04; kvm bridged networking not working with public ip addresses

    - by senorsmile
    I have a dedicated hosted server box with Ubuntu 10.04 64-bit installed. I would like to run a KVM guest with Ubuntu 8.04 for some PHP 5.2-compatible apps (they don't work right with PHP 5.3, the default in Ubuntu 10.04). I installed KVM as instructed at https://help.ubuntu.com/community/KVM/Installation and created the VM with virt-manager; I never could figure out how to use virt-install or any of those automated installers, so I just installed it from the disc. I set up bridged networking as per https://help.ubuntu.com/community/KVM/Networking, but the bridged connection doesn't work. Here's my /etc/network/interfaces on the host, running Ubuntu 10.04 (with the specific public IPs blanked):
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address xx.xx.xx.xx
            netmask 255.255.255.248
            gateway xx.xx.xx.xa
            bridge_ports eth0
            bridge_stp on
            bridge_fd 0
            bridge_maxwait 10
    Here's my /etc/network/interfaces on the guest, running Ubuntu 8.04:
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address xx.xx.xx.xy
            netmask 255.255.255.248
            gateway xx.xx.xx.xa
    The two VMs can communicate with each other, but the guest VM can't reach anyone in the real world. Here's my /etc/libvirt/qemu/store_804.xml:
        <domain type='kvm'>
          <name>store_804</name>
          <uuid>27acfb75-4f90-a34c-9a0b-70a6927ae84c</uuid>
          <memory>2097152</memory>
          <currentMemory>2097152</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/store_804.img'/>
              <target dev='hda' bus='ide'/>
            </disk>
            <disk type='block' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
            </disk>
            <interface type='bridge'>
              <mac address='52:54:00:26:0b:c6'/>
              <source bridge='br0'/>
              <model type='virtio'/>
            </interface>
            <console type='pty'>
              <target port='0'/>
            </console>
            <console type='pty'>
              <target port='0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes'/>
            <sound model='es1370'/>
            <video>
              <model type='cirrus' vram='9216' heads='1'/>
            </video>
          </devices>
        </domain>
    Any idea where I've gone wrong?
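    Two things commonly bite otherwise correct setups like this: netfilter being applied to bridged frames on the host, and STP (which this bridge has switched on) holding the port in a non-forwarding state for the first half-minute or so; with a dedicated hosted box, the provider's switch may also drop frames from source MAC addresses it doesn't know. A minimal sketch of checks, not a confirmed diagnosis:

        sysctl net.bridge.bridge-nf-call-iptables    # if 1, host iptables rules also filter the guest's bridged traffic
        brctl showstp br0                            # is the eth0 port stuck in "listening"/"learning"?
        brctl showmacs br0                           # does the guest's MAC (52:54:00:26:0b:c6) appear on the bridge?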

    Read the article

  • slow network in centos5 VM with centos5 host running KVM

    - by dan
    I set up KVM following the guide here: http://www.cyberciti.biz/faq/centos-rhel-linux-kvm-virtulization-tutorial/ I set up a bridged network and it works, except that the transfer speed is 200KB/s instead of the gigabit speed I get on the host machine by itself. I tried editing the guest network settings to set "model=virtio" per http://wiki.libvirt.org/page/Virtio but this just moves ifcfg-eth0 to ifcfg-eth0.bak in the VM and networking stops working altogether. I tried moving ifcfg-eth0 back and bringing eth0 up again, which works, but then the transfer speed is ~60KB/s. I have no idea what else to try. Any suggestions would be greatly appreciated.
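    For what it's worth, on a CentOS 5 guest the ifcfg-eth0 file is typically moved aside by kudzu when the NIC changes type, because the virtio adapter looks like brand-new hardware; the usual path is to recreate the config for the new device rather than restore the old file. A minimal sketch, with the guest assumed to use DHCP:

        virsh edit <guest>    # set <model type='virtio'/> inside the <interface> element
        # then, inside the guest, recreate the config for the new adapter:
        cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
        DEVICE=eth0
        BOOTPROTO=dhcp
        ONBOOT=yes
        EOF
        service network restart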

    Read the article

  • KVM machine does not start ssh, network is started, used to work

    - by lleto
    I have been searching and pulling my hair out for the last 6 hours. I have a virtual machine that has been running fine for the last six months; I was happily ssh'ing into it, and it was running a database and some small apps. Tonight ssh stopped working, so I decided to reboot the machine. I now have the following situation:
        - virsh list --all reports the machine as running
        - I can ping the machine and get a reply
        - when I ssh to the machine I see "ssh: connect to host [myserver] port 22: Connection refused"
        - nmap does not show port 22 as open
    I have tried to:
        - reboot the machine once more (no luck)
        - mount the filesystem and check /etc/ssh/sshd_config (it has not changed since the working situation)
        - install virsh console, which does not seem to work
    When I mount the filesystem directly using losetup, the strange thing is that the file dates in /var/log/ seem to be frozen around the time of the crash. If I look in /var/run/ I can see an sshd.pid, but its timestamp is 6 hours ago (and numerous reboots back). My virsh XML looks like this:
        <domain type='kvm' id='21'>
          <name>myserver</name>
          <uuid>09678c8d-a99b-1d18-a7af-88d027cc8f93</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>1</vcpu>
          <os>
            <type arch='x86_64' machine='pc-1.0'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>destroy</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/dev/disk01/myserver'/>
              <target dev='hda' bus='ide'/>
              <alias name='ide0-0-0'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <alias name='ide0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:e3:13:86'/>
              <source bridge='br0'/>
              <target dev='vnet0'/>
              <model type='virtio'/>
              <alias name='net0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <source path='/dev/pts/1'/>
              <target port='0'/>
              <alias name='serial0'/>
            </serial>
            <console type='pty' tty='/dev/pts/1'>
              <source path='/dev/pts/1'/>
              <target type='serial' port='0'/>
              <alias name='serial0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
              <listen type='address' address='127.0.0.1'/>
            </graphics>
            <video>
              <model type='cirrus' vram='9216' heads='1'/>
              <alias name='video0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <alias name='balloon0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
          <seclabel type='dynamic' model='apparmor' relabel='yes'>
            <label>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</label>
            <imagelabel>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</imagelabel>
          </seclabel>
        </domain>
    I'm rather lost as to where to look to get the machine up and running again. On the same KVM host I have another server running which works fine. Both are Ubuntu 12.04. All help is welcome.
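    Since the domain XML already defines a serial console on port 0, a login prompt that bypasses ssh mostly needs a getty inside the guest; on an Ubuntu 12.04 (upstart) guest this can be added offline while the filesystem is mounted. A minimal sketch, with the mount point assumed:

        cat > /mnt/guest/etc/init/ttyS0.conf <<'EOF'
        start on stopped rc RUNLEVEL=[2345]
        stop on runlevel [!2345]
        respawn
        exec /sbin/getty -L 115200 ttyS0 vt102
        EOF
        # then, after unmounting and starting the guest:
        virsh console myserver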

    Read the article

  • KVM Slow performance on XP Guest

    - by Gregg Leventhal
    The system is very slow to do anything, even browsing a local folder, and the CPU frequently sits at 100%. The guest is XP 32-bit; the host is Scientific Linux 6.2 with libvirt 0.10. The guest's XP OS shows the ACPI Multiprocessor HAL and has virtio drivers installed for the NIC and the SCSI disk. CPU info on the host:
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 42
        model name      : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        stepping        : 7
        cpu MHz         : 3200.000
        cache size      : 8192 KB
        physical id     : 0
        siblings        : 8
        core id         : 0
        cpu cores       : 4
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 6784.93
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:
    And the relevant part of the domain XML:
        <memory unit='KiB'>4194304</memory>
        <currentMemory unit='KiB'>4194304</currentMemory>
        <vcpu placement='static' cpuset='0'>1</vcpu>
        <os>
          <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
          <apic/>
          <pae/>
        </features>
        <cpu mode='custom' match='exact'>
          <model fallback='allow'>SandyBridge</model>
          <vendor>Intel</vendor>
          <feature policy='require' name='vme'/>
          <feature policy='require' name='tm2'/>
          <feature policy='require' name='est'/>
          <feature policy='require' name='vmx'/>
          <feature policy='require' name='osxsave'/>
          <feature policy='require' name='smx'/>
          <feature policy='require' name='ss'/>
          <feature policy='require' name='ds'/>
          <feature policy='require' name='tsc-deadline'/>
          <feature policy='require' name='dtes64'/>
          <feature policy='require' name='ht'/>
          <feature policy='require' name='pbe'/>
          <feature policy='require' name='tm'/>
          <feature policy='require' name='pdcm'/>
          <feature policy='require' name='ds_cpl'/>
          <feature policy='require' name='xtpr'/>
          <feature policy='require' name='acpi'/>
          <feature policy='require' name='monitor'/>
          <feature policy='force' name='sse'/>
          <feature policy='force' name='sse2'/>
          <feature policy='force' name='sse4.1'/>
          <feature policy='force' name='sse4.2'/>
          <feature policy='force' name='ssse3'/>
          <feature policy='force' name='x2apic'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='rtc' tickpolicy='catchup'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/libexec/qemu-kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='none'/>
            <source file='/var/lib/libvirt/images/Server-10-9-13.qcow2'/>
            <target dev='vda' bus='virtio'/>
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </disk>
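    One detail worth examining in this configuration: the single vCPU is pinned to host CPU 0 (cpuset='0'), so it competes with whatever else the host schedules on that core. A minimal sketch for checking where the vCPU actually runs and letting it float for comparison, offered as an experiment rather than a fix:

        virsh vcpuinfo <domain>          # shows which physical CPU the vCPU is currently on
        virsh vcpupin <domain> 0 0-7     # allow vCPU 0 to use all eight host threads for a test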

    Read the article

  • Virtual machine on ubuntu

    - by MITHIYA MOIZ
    I have configured a virtual machine on Ubuntu with the help of this article: https://help.ubuntu.com/9.04/serverguide/C/libvirt.html I managed to finish everything except the major part, getting the virtual host to talk to the real network, which I guess should be done via a bridge interface. In Virtual Machine Manager, whichever interface I choose, it tells me the interface is not bridged. When I try to bridge the interface eth0 as below, I cannot reach the network through it and the host server loses all communication with the network:
        auto br0
        iface br0 inet static
            address 192.168.0.223
            network 192.168.0.0
            netmask 255.255.255.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            bridge_ports eth0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off
    But when I remove the bridge interface from /etc/network/interfaces and configure eth0 as below, it works fine:
        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.0.223
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            dns-nameservers 62.215.6.51
            gateway 192.168.0.1
    How can I set up the bridge interface correctly, and what should my /etc/network/interfaces file look like?
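    For comparison, a bridged /etc/network/interfaces for this host might look roughly like the sketch below (addresses copied from the question; the bridge-utils package must be installed, and eth0 deliberately carries no address of its own once it is enslaved to br0). This is a sketch, not a verified configuration:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.168.0.223
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 62.215.6.51
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0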

    Read the article

  • Force an LXC container to use its own IP address

    - by emma sculateur
    Sorry if this question has already been asked; I could not find it. My setup is as follows:
        UBUNTU-LXC (container inside UBUNTU-VM): eth0 10.0.0.3/24, attached to lxcbr0 10.0.0.1/24 inside the VM
        UBUNTU-VM (KVM guest on HOST): eth0 192.168.100.2/24, attached to the host bridge br0 192.168.100.1/24
        OTHER VM (KVM guest on HOST): eth0 192.168.100.3/24, also attached to br0
    When I ping 192.168.100.3 from my UBUNTU-LXC, the source IP address is automatically changed to 192.168.100.2 by UBUNTU-VM. It's like having NAT in the way, whereas I really want my UBUNTU-LXC to talk with its own IP address. Is there any way to do this?
    Edit: this information may be relevant. I am using KVM + libvirt to set up my VMs, and here is how the interface is created in UBUNTU-VM:
        <interface type='bridge'>
          <mac address='52:54:00:cb:aa:74'/>
          <source bridge='br0'/>
          <model type='e1000'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
        </interface>
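    The rewriting almost certainly comes from the default lxcbr0 setup inside UBUNTU-VM, which masquerades container traffic out through the VM's eth0. For the container to appear on the 192.168.100.0/24 network with its own address, it would need to be attached to a bridge that enslaves the VM's eth0 instead of lxcbr0. A minimal sketch of the container's network config, with the bridge name and address made up for illustration:

        # on UBUNTU-VM: create a bridge (e.g. br1) that enslaves eth0, then in
        # /var/lib/lxc/UBUNTU-LXC/config:
        lxc.network.type = veth
        lxc.network.link = br1
        lxc.network.flags = up
        lxc.network.ipv4 = 192.168.100.4/24
        lxc.network.ipv4.gateway = 192.168.100.1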

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >