Search Results

Search found 488 results on 20 pages for 'xen'.


  • Machine check events logged

    - by GoldenNewby
    In /var/log/messages, this error occurred: "Sep 19 13:18:15 wdc kernel: [2772302.630416] Machine check events logged". Shortly thereafter, the entire server became unresponsive. This is in the log of the dom0 for a Xen server (running the latest version on Debian Squeeze). Can anyone shed some light on what this error means? Should I be ordering new hardware? Edit: also, the message seems to imply that something was logged; where can I find that?
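    A minimal sketch of how one might pull the actual machine check details on a Debian dom0 (this assumes a 64-bit kernel and that the mcelog package is available; package and log names are the usual ones, but treat them as assumptions):

        # decode machine check exceptions on the dom0
        apt-get install mcelog
        mcelog                   # decodes any pending events from /dev/mcelog
        cat /var/log/mcelog      # events already decoded by the mcelog daemon/cron job, if it is running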

    Read the article

  • CentOS with an Ubuntu kernel

    - by Gaia
    The Rackspace cloud server tech tells me my CentOS 5.4 VPS (Xen) runs "CentOS with an Ubuntu kernel". Could someone explain, in plain terms, what "CentOS with an Ubuntu kernel" means, and whether there are any disadvantages (performance, management) compared to running CentOS with a CentOS kernel? Thanks
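    A quick way to see this for yourself, using only standard commands (nothing Rackspace-specific assumed):

        cat /etc/redhat-release   # the userland: e.g. "CentOS release 5.4 (Final)"
        uname -r                  # the running kernel version string
        uname -a                  # full kernel banner; an Ubuntu-built kernel usually shows an Ubuntu-style version/builder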

    Read the article

  • Finding the reason of a force shutdown of a VM

    - by Ricardo Reyes
    We have a Linux VM running under XenServer that reboots itself for no apparent reason. Checking the /var/log files in Xen, we noticed that it's sending a force shutdown to the VM, like this: messages:Dec 6 15:01:07 XenSrvDell2 BLKTAP-DAEMON[7309]: /local/domain/0/backend/tap/19/51728: got start/shutdown watch on /local/domain/0/backend/tap/19/51728/tapdisk-request What we can't find is the reason why the force shutdown was initiated. Is there any "higher level log" that might tell us who triggered the shutdown, or why?
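    A rough sketch of where one might look next on a XenServer dom0 (log names are the usual ones for that release family, but treat the exact paths and contents as assumptions); xapi normally records the API call that initiated a shutdown:

        grep -iE "hard_shutdown|clean_shutdown|hard_reboot" /var/log/xensource.log
        grep -i "<vm name or uuid>" /var/log/xensource.log | less    # <vm name or uuid> is a placeholder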

    Read the article

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor NFS performance between two Xen virtual machines (client and server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than would be expected from the measured network speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance; usually this advice is framed in terms of increasing the number from the default of 8 on heavily used servers. What I find in my current configuration:
        RPCNFSDCOUNT=8 (default): 13.5-30 seconds to cat a 1GB file on the client, so 35-80MB/sec
        RPCNFSDCOUNT=16: 18 seconds to cat the file, 60MB/s
        RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!), 125MB/s
        RPCNFSDCOUNT=2: 87 seconds to cat the file, 12MB/s
    I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI passthrough; on the server I can cat the file in a matter of seconds (250MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread, as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread? A few other things I have tried changing, to little or no effect:
        increasing /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K and 1M from the defaults of 192K and 256K
        increasing /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K
        mounting with client options rsize=32768,wsize=32768
    From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes), but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices, /dev/sda and /dev/sdb; dmraid picks up a fakeRAID-0 striped across them, which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.
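    For reference, a minimal sketch of the sort of test loop described above (the server hostname, mount point and config file location are assumptions based on a stock Ubuntu nfs-kernel-server setup):

        # run as root on the client; adjusts the thread count on the server, then times a cold-cache sequential read
        for n in 1 2 8 16; do
            ssh nfs-server "sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=$n/' /etc/default/nfs-kernel-server \
                            && /etc/init.d/nfs-kernel-server restart"
            echo 3 > /proc/sys/vm/drop_caches        # drop the client's page cache before each run
            time cat /mnt/nfs/bigfile > /dev/null    # sequential read of the 1GB test file over NFS
        done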

    Read the article

  • What are optimal strategies for using mapreduce and other applications on the same server?

    - by user45532
    I have two applications that I need to run continuously to process data: 1) an app that processes and aggregates information from sources, and 2) a MapReduce workflow* that processes the above info. I've thought about either getting VPS hosting or getting my own inexpensive server and using Xen to split its resources. Getting a quad-core box with 2 GB of RAM seems a lot cheaper than the grid options I've seen at Slicehost, Rackspace and others...

    Read the article

  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
    Recently I have been tasked with the daunting process of converting a set of HVM-enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project were that:
        the operating system must be functionally identical after the migration;
        minimal modification to the operating system (with the exception of kernel / drive mapping).
    I was also allowed to change the bootloader (i.e. grub) in whatever way I see fit. I have attempted this, and I would first like to show you the steps I took. At the moment this is CentOS 5.5 specific. Steps: yum install kernel-xen, which installed 2.6.18-194.32.1.el5xen. I then edited /boot/grub/menu.lst and changed my specs to match:
        title CentOS (2.6.18-194.32.1.el5xen)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
        initrd /initrd-2.6.18-194.32.1.el5xen.img
    Then I changed my XenServer parameters to match:
        xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img"
        xe vm-param-set uuid=[vm uuid] HVM-boot-policy=""
        xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub
        xe vbd-param-set uuid=[Virtual Block Device/VBD uuid] bootable=true
    Something to note: I am running a VolGroup LVM ;) Anyway, after all these steps (which aren't much!) I boot the VM and it boots the initial kernel just fine, however I am presented with this error. Boot screen:
        device-mapper: dm-raid45: initialized v0.2594l
        Waiting for driver initialization.
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Activating logical volumes
        Volume group "VolGroup00" not found
        Creating root device.
        Mounting root filesystem.
        mount: could not find filesystem '/dev/root'
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory
    Now my hint is that it cannot detect /, because when you change from HVM mode to PV something (not that obvious) happens: when you make an SR (storage) on an HVM guest, it gets mounted in the guest OS as /dev/hda, whereas in PV mode it presents itself as /dev/xvda... Could this be the answer? And if so, how the heck do I implement it?
    Update: I have gotten a bit further in my quest, as it now detects the LVMs. To do this, I needed to rebuild the initrd image for the Xen kernel. Command:
        mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen
    Now when I boot I get this. Boot screen:
        Loading dm-raid45.ko module
        device-mapper: dm-raid45: initialized v0.2594l
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Found volume group "VolGroup00" using metadata type lvm2
        Activating logical volumes
        3 logical volume(s) in volume group "VolGroup00" now active
        Creating root device.
        Mounting root filesystem.
        mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory
        Kernel panic - not syncing: Attempted to kill init!
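    On the /dev/hda versus /dev/xvda hunch above, a speculative sketch of the guest-side renaming one might try before flipping the VM to PV (the device names are assumptions; the LVM root path itself should not need to change, only any raw hdaN references such as a separate /boot):

        # run inside the guest while it still boots in HVM mode
        grep hda /etc/fstab /boot/grub/menu.lst /boot/grub/device.map
        sed -i 's|/dev/hda|/dev/xvda|g' /etc/fstab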

    Read the article

  • Suggestions for open source testing tool for cloud computing

    - by vikraman
    Hi, I want to know if there is any open source testing tool for cloud computing. We have built a cloud framework with Xen, Eucalyptus, Hadoop, and HBase as different layers. I am not looking at testing each of these tools separately, but I want to test them from the perspective of how they fit into a cloud environment (for example, the scalability of the Xen hypervisor in handling multiple VMs). It would be great if you could suggest an open source tool for the above.

    Read the article

  • vgcreate --> "Command failed with status code 5." what does this mean?

    - by erik
    I'm playing around with LVM on a CentOS domU in a Xen-based VPS. I'm in rescue mode and I've created one physical volume (pvcreate /dev/xvda1) for my entire drive, which is formatted as LVM. I'm now trying to create a volume group using vgcreate main /dev/xvda1, but it's returning "Command failed with status code 5." I've been unable to find an explanation for this error code. Does anyone know what it means? For what it's worth, my goal is to create multiple logical volumes on the drive using LVM. Thanks
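    A sketch of what can be run to coax more detail out of the LVM tools (standard lvm2 commands, nothing Xen-specific assumed):

        pvs -v                          # confirm /dev/xvda1 really was initialised as a physical volume
        vgcreate -vvv main /dev/xvda1   # repeated -v makes vgcreate report which step is failing
        dmesg | tail -20                # the kernel side sometimes says more than the status code does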

    Read the article

  • Bridging with aliased Ethernet card for Virtualizing with single Ethernet card

    - by user113505
    We have a server with a good CPU and plenty of RAM, so we are planning to do Xen virtualization on Ubuntu 12.04 server to handle high traffic. The plan is to use the host machine only to manage the VMs (no NATing). A new public IP will be assigned to that VM, and for that I think we need a bridge to the external network (my machine has only a single Ethernet card, aliased with 4 different public IPs). Is it possible to create a bridge using a single Ethernet card aliased to 4 public IPs? Do we need an additional Ethernet card to do bridging? I only have SSH access to the machine. Any suggestions will be appreciated.
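    For reference, a minimal sketch of a bridged /etc/network/interfaces on Ubuntu 12.04 (the addresses are placeholders and it assumes the bridge-utils package is installed); the public IPs/aliases would sit on br0 rather than on eth0 itself:

        auto br0
        iface br0 inet static
            address 203.0.113.10      # placeholder public IP
            netmask 255.255.255.0
            gateway 203.0.113.1
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

    A second physical NIC is generally not needed just for bridging; the existing card is simply enslaved to br0.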

    Read the article

  • vmware server end of life, where to go now?

    - by matnagel
    We have some virtual machines on VMware Server 2.x running on 64-bit hardware and are quite happy with it. As VMware Server will no longer be offered, we are thinking of migrating to ESXi, which seems to be free. We will have to install the specialized network cards, but that's a minor problem. Still, once left alone with a rather quietly discontinued product, there is some resistance to VMware. VirtualBox seems to work: http://blogs.oracle.com/virtualization/2010/06/migrating_from_vmware_to_virtu.html What other free (of licensing cost) options are there? We have Windows Server 2003 32-bit VMs and also Linux 32- and 64-bit VMs to migrate. So Xen, which does not run Microsoft OSes, does not seem to be an option.

    Read the article

  • Implementing a Linux-HA based clustering setup on Windows

    - by Alex
    I have a (tried and tested) setup involving:
        2x load balancing nodes on a floating IP via Heartbeat, load balancing the 2 Tomcat servers
        2x Tomcat servers
        2x Galera Cluster MySQL servers replicating synchronously (+1 arbitrator node)
    All are evenly spread across 2 physical nodes. Now, I have to somehow get the same functionality on Windows Server (2008, I think) nodes... running under Xen virtualization. There is no possibility of using Linux for any of the nodes. I count two main problems: no Linux-HA Heartbeat daemon for the load balancing, and no Galera synchronous replication for MySQL. I freely admit to having nearly no Windows knowledge when it comes to clustering. Is there a way to closely mimic the setup I have described, or is it a total write-off?

    Read the article

  • Changing the mac address in a libvirt xml config file breaks network connectivity for the guest

    - by foob
    I'm using Xen with libvirt and trying to set it up on a bridged interface. I am able to install an OS and everything works as I would expect. If I save the XML output from "virsh dumpxml guest", edit the MAC address for the interface, and then define the domU with this new XML file, I find that traffic is no longer forwarded from the vif0.0 interface to br0. The ifcfg-eth0 file on the guest was automatically updated to reflect the new MAC address and the ifconfig output looks the same. Does anyone know why this is happening, or how to properly change the MAC address for a libvirt configuration?
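    A sketch of the usual redefine workflow, plus one guest-side item worth checking (the udev rule path is typical for distros of that era, but treat it as an assumption): those releases pin interface names to MAC addresses, so a changed MAC can silently come up as eth1 instead of eth0.

        virsh dumpxml guest > guest.xml      # edit the <mac address='...'/> element in this file
        virsh define guest.xml               # re-register the domain with the edited XML
        # then, inside the guest, see whether the old MAC is still pinned to eth0:
        cat /etc/udev/rules.d/70-persistent-net.rules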

    Read the article

  • smartctl or hddtemp for xvda [on hold]

    - by HST
    I'm trying to check the state of the drives on a remote server running Debian wheezy. I'm using software RAID10 on top of, I guess, Xen, since the entries in /dev are /dev/xvda and /dev/xvdb. But if I try smartctl -a /dev/xvda I get: /dev/xvda: Unable to detect device type. Smartctl: please specify device type with the -d option. I've tried various device type guesses; none work. I have a similar problem with hddtemp, which reports: ERROR: /dev/xvda: can't determine bus type (or this bus type is unknown). I've searched the smartmontools documentation but can't find any discussion of virtual disks... How do I get behind the virtualisation to something the smart tools or hddtemp can work with?
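    The xvd* devices are paravirtual block devices and do not pass SMART/ATA commands through, so the usual approach (a sketch, assuming you can get at the physical host or ask the provider to) is to query the real disks from dom0; the device names below are assumptions:

        # on the physical host (dom0), not inside the guest
        smartctl -a /dev/sda
        smartctl -a /dev/sdb
        hddtemp /dev/sda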

    Read the article

  • ruby: invalid opcode

    - by adamo
    There's a fairly complex application that runs on two VMs (on Xen). Both VMs run CentOS 6.2 with the exact same packages and configuration for every application running (minus networking, which is different). SELinux is disabled on both. On machine A the application builds perfectly. On machine B, when running some tests, we get: ruby[2010] trap invalid opcode ip:7ff9d2944c30 sp:7fff9797e0f8 error:0 in ld-2.12.so[7ff9d2930000+20000] Digging a bit more to find out where the machines differ, machine A has: model name : Six-Core AMD Opteron(tm) Processor 2423 HE and machine B: model name : AMD Opteron(TM) Processor 6272 I've tried booting machine B with cpuid_mask_cpu=fam_10_rev_c in grub, but it did not help. So any advice on how to deal with this, or how to approach the hosting provider so as to run this VM on another physical machine, will be greatly appreciated.
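    A quick, hedged first step is to compare the CPU feature flags each guest actually sees (plain /proc/cpuinfo, nothing Xen-specific assumed), since an invalid-opcode trap is often down to an instruction one environment exposes and the other does not:

        # run on both VMs, then diff the two files
        grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/flags.$(hostname)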

    Read the article

  • Virtualizing OpenSolaris with physical disks

    - by Fionna Davids
    I currently have an OpenSolaris installation with a ~1TB RAID-Z volume made up of three 500GB hard drives. This is on commodity hardware (an ASUS NVIDIA-based board with an Intel Core 2). I'm wondering whether anyone knows if XenServer or Oracle VM can be used to install 2009.06 and be given physical access to the three SATA drives, so that I can continue to use the zpool and still use the Xen bits for other areas. I'm thinking of installing the JeOS version of OpenSolaris, having it manage just my ZFS volume and some other stuff for work (4GB), then running a Windows (2GB) and a Linux (1GB) VM (there's 8GB RAM on that box) virtualised for testing things. Currently I am using VirtualBox installed on OpenSolaris for the Windows and Linux testing, but wondered if the above was a better alternative. Essentially: the 3 disks go to the OpenSolaris guest VM, which loads the zpool and offers it to the other VMs via CIFS.

    Read the article

  • How to make a vm scale when demand for resource increases

    - by Cray XT3
    I have a server with 16 virtual cores and 24GB RAM, using Xen virtualization and Ubuntu as dom0. I created 4 VMs (in para mode), each with different applications. CPU load varies on each VM; sometimes the first VM reaches nearly 100% CPU while the others are under 25% or even less. So is there a way in which a VM can get CPU from the other VMs when they are not actually using it, or when their utilization is under 25%? The same question applies to RAM. I am not sure whether I am really describing "cloud" here. Initially I would like to give every VM a single vCPU, but let it scale up to 8 or more by taking CPU from other VMs if they are not using it. Is there any kind of tool that makes a VM scale its resources when demand increases? Are CloudStack and OpenStack designed for this kind of purpose, or are they just GUIs to manage VMs?
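    It may be worth noting that Xen's default credit scheduler is already work-conserving: an uncapped vCPU can consume physical CPU time that other domains are leaving idle. A minimal sketch of tuning relative shares and memory (the domain name is a placeholder; newer toolstacks generally accept the same subcommands via xl):

        xm sched-credit -d vm1 -w 512 -c 0   # higher relative weight, no hard cap, for the busy guest
        xm mem-set vm1 2048                  # adjust memory on the fly, within the guest's maxmem limit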

    Read the article

  • Running SQL 2008 on a VM

    - by chris.w.mclean
    We are pondering trying to set up a SQL 2008 instance inside a VM for a production environment. All our SQL instances use iSCSI over gigabit Ethernet to talk to a NAS, as would this new instance. Any reason this is a bad idea, or any considerations to make this work well? The VM would be running in Xen 5.5, or we could set it up in Hyper-V if there's a compelling case for that. And the VM's VHD would be stored on a different NAS than the one the SQL storage is on.

    Read the article

  • How to setup a fast VPN server

    - by Saif Bechan
    I am trying to set up a VPN that has a fast download speed. The server I have is a Linux server, and from there I can download at 2 megabytes a second. At home I can also download at 2 megabytes a second. All the downloads I do are from the same source, not different servers. Now I have set up a VPN connection between my home and the server, and I am only downloading at 64 kilobytes a second! The connection I have created is a PPTP server on a Debian machine. My question is whether it is possible to optimize this connection. Should I maybe switch to OpenVPN, or change operating systems? Or are there settings to tweak to make the connection optimal? PS: The server I am running is on a Xen node. I have done the proper IP forwarding.
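    Before switching software, one cheap thing to rule out (an assumption, not a diagnosis) is MTU/fragmentation trouble, which is a classic cause of a PPTP tunnel crawling while the raw link is fast. A sketch of clamping the PPP MTU on the Debian server, using standard pppd options in the pptpd options file:

        # /etc/ppp/pptpd-options
        mtu 1400
        mru 1400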

    Read the article

  • Corosync - stopping the service crashes the server

    - by Antipop
    I am trying to set up a test cluster on a Xen server with 2 paravirtualized CentOS 5.4 machines. I am using Pacemaker+Corosync, following the instructions at http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf and other sites. Anyway, when I try to manually stop the corosync service, about 80% of the time the whole VM locks up with the message "Waiting for corosync services to unload" and I am forced to shut the machine down manually. The remaining 20% of the time, the VM keeps responding and adds dots to the above message, but it still won't actually stop the service. There aren't many resources on the internet about this particular error. Any ideas? Thanks in advance.

    Read the article

  • VPS hang when one Virtual CPU usage is 100%

    - by garconcn
    We are using XenCenter to manage all of our cPanel VPS servers. The hardware has two CPUs (Intel(R) Xeon(R)) and 32GB of memory. Each host has 4 cPanel VPSes, and each VPS has 8GB of memory and 4 virtual CPUs. Every one or two months, one of the VPS servers will hang because one virtual CPU's usage hits 100%, and it won't release the CPU unless we force a reboot. We have 10 similar hosts, and this brings a server down almost every day. We have tried to avoid the statistics processing and Fantastico updates during the night, but the problem still happens randomly. I cannot find anything in the server log when it hangs. Any clue? Thank you.
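    For what it's worth, a sketch of what could be captured from dom0 while a guest is wedged (standard XenServer tooling; nothing beyond xentop and xe is assumed):

        xentop -b -i 2                               # batch-mode snapshot of per-domain CPU usage
        xe vm-list params=name-label,power-state
        tail -n 200 /var/log/xensource.log           # xapi's view of events around the hang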

    Read the article

  • Xen DomU does not have network connectivity

    - by Prakashkumar Thiagarajan
    I am trying to install Xen on my Fedora box. The Dom0 image has network connectivity, but when I try to create a DomU, it does not. I want to be able to run in bridged mode, and I have set up the /etc/xend/xend-config.sxp file accordingly. My config file looks like:
        kernel = "/boot/vmlinuz-2.6.18-xenU"
        memory = 64
        name = "clientA"
        vif = ['bridge=xenbr0,mac=12.34.56.78.9A.BC']
        root = "/dev/sda1 ro"
        ramdisk = "/boot/initrd-linux.img"
        extra = "ro selinux=0.3 initcall_debug"
        features = 'auto_translated_physmap'
    Am I missing something?
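    For comparison, these are the bridge-related lines one would normally expect to be active in /etc/xend/xend-config.sxp for bridged networking (a sketch of the stock settings, not a confirmed fix for this particular case):

        (network-script network-bridge)
        (vif-script vif-bridge)

    Once xend is running, brctl show should list xenbr0 with the physical interface enslaved to it.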

    Read the article

  • xl create doesn't bring up console

    - by ineff
    I've tried to run a VM in Xen 4.2 using the xl command (from what I gather this should be the standard toolstack now, while xm is deprecated). In this case I have the following configuration file:
        kernel = '/media/home_separata/domU_kernel/boot/vmlinuz-linux'
        ramdisk = '/media/home_separata/domU_kernel/boot/initramfs-linux.img'
        name = "domU_Arch_linux"
        memory = "512"
        root = '/dev/xvda1 ro'
        disk = ['file:/media/home_separata/domU_kernel/arch_linux_kernel.img,xvda1,w']
        vif = ['mac=aa:::10:11:f1,ip=192.168.0.2,bridge=xenbr0']
    When I try to start the virtual machine with xl create, it seems to work (it also brings up the vif interfaces), but if I try to connect via xl console it gives an error: xenconsole: Could not read tty from store: No such file or directory. The funny thing is that I have the inverse problem using xend/xm (in that case xend doesn't bring up the vif interfaces but does activate the console). Does anyone have any suggestions?
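    A couple of things worth checking (assumptions, not a known fix): xl console relies on the xenconsoled daemon publishing the guest's tty node in xenstore, so confirm the daemon is running and see whether the node exists:

        ps aux | grep '[x]enconsoled'                  # the console daemon must be up for xl console to work
        xl list                                        # note the domain id
        xenstore-ls /local/domain/<domid>/console      # <domid> is a placeholder for the id from xl list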

    Read the article

  • Running commands on FreeBSD Live CD

    - by jmc
    I'm running FreeBSD 9.1-PRERELEASE on a VPS running on Xen virtualization. I tried to update it to 9.1-RELEASE, but mergemaster toasted my /etc/master.passwd and /etc/passwd, so what I have now are blank copies of the two files. What I did was boot a live CD, mount my root partition at /mnt, and manually re-enter every entry in /mnt/etc/master.passwd and /mnt/etc/passwd from another FreeBSD server. I believe that every time you edit master.passwd and passwd you have to run pwd_mkdb, but this gives me a "Read Only File" error. What I plan to do is enable PermitRootLogin and PermitEmptyPasswords first, so I can log in as root before I redo the necessary changes. But I have to run pwd_mkdb, so is there a way to run this command from the live CD?
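    pwd_mkdb can be pointed at the mounted system rather than the live CD's read-only /etc, which may well be where the "read only" error comes from. A sketch (the mount point is the one from the question; -d and -p are standard FreeBSD pwd_mkdb flags):

        # rebuild the password databases and passwd file under /mnt instead of /etc
        pwd_mkdb -d /mnt/etc -p /mnt/etc/master.passwd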

    Read the article

  • Red Hat Kickstart: How do I Prevent partitioning?

    - by frio
    Hey all, I'm currently working on a new virtualisation setup using Xen and CentOS for my workplace. We intend to deploy the domUs into LVM volumes. Currently, the only thing preventing this from working as smoothly as we'd like is the Kickstart script's insistence on partitioning. This is the relevant part from our current KS template (which I've been messing with):
        # Partitioning
        clearpart --all --initlabel --drives=xvda
        part / --size=0 --grow --ondisk=xvda --fstype=ext3
    This sets up a single partition and installs to it - which would be fine, but I'd prefer there to be no partitions, with the install going directly to the existing LVM volume (so that we could then mount the LVM from the dom0 for backup and maintenance purposes). It's possible I'm doing something wrong, and should be exporting the volume as xvda1 rather than xvda - which I'm more than happy to amend - but I'm still not sure how I'd navigate the Kickstart! I'd really appreciate any help :). Cheers in advance!

    Read the article

  • Combining multiple linux boxes and create VMs out of it

    - by NS Gopikrishnan
    I am new to virtualization. I am running on Ubuntu. I have a set of Linux machines (5 to 6 machines) which I want to combine into a single resource pool, and then on demand create multiple virtual machine instances out of it. This is comparable to what VirtualBox does on a single system. I stumbled across many keywords: Xen, Eucalyptus, OpenStack, etc. But things are very vague as to which will help me achieve this requirement. Any help will be appreciated :) Thanks in advance!

    Read the article
