ConVirt is an intuitive, graphical management tool providing comprehensive
life cycle management for Virtual Machines and virtualization infrastructures. ConVirt is built on the firm design
philosophy that ease-of-use and sophistication can, and should,
co-exist in a single management tool. With its central console paradigm, performance and configuration dashboard, soup-to-nuts Virtual Machine
lifecycle management, integrated Virtual Appliance Catalogue, and many more great features, ConVirt should prove an
invaluable tool for seasoned administrators as well as those
new to virtualization.
VM : Virtual machine (Xen Virtual Machine in this context)
Server : A physical host or machine
Client Machine : The machine on which ConVirt is invoked
Managed Server : A server to be managed using ConVirt
Server Pool : A collection of managed servers, typically owned/managed by a department or used to provide related services/applications
Xen 3.2 support
ConVirt 0.8.2 supports Xen 3.2
- CPU utilization reporting incorrect percentages
- Linux Desktop crashing if VNC viewer is missing
- Login failure when multiple keys added to the ssh-agent
Automatic Announcements for Updates
This is a new tool to help you stay informed about ConVirt activities without having to repeatedly visit the ConVirt website. Whenever new releases or critical updates are available, ConVirt will notify you by showing them in a "Convirt Updates" window. You can turn off this feature by changing the setting in the ConVirt configuration file. To manually check for updates, select the "Help->Check for Updates" menu item.
ConVirt is distributed both as a source/binary tarball and as a binary rpm for a few specific distributions. Please check out the distribution-specific notes in the Appendix.
1. cd to the location where you've extracted the tarball.
e.g.> cd ~/convirt.0.8.2/
2. If you haven't already, deploy python-paramiko (an SSH client library ConVirt uses for remote management) in your environment. Python-Paramiko is available for download at http://www.lag.net/paramiko (follow the installation instructions at the site carefully).
Here are useful links for the rpms:
http://rpm.pbone.net/index.php3/stat/4/idpl/3714841/com/python-crypto-2.0.1-15.i586.rpm.html
http://rpm.pbone.net/index.php3/stat/4/idpl/5215518/com/python-paramiko-1.7.1-0.pm.1.i586.rpm.html
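As a quick sanity check (a suggestion, not part of ConVirt itself), you can confirm that paramiko is importable before launching ConVirt. The `python3` binary is assumed here; older systems may call it `python`:

```shell
# Check that the paramiko SSH library is importable; print a status either way.
check_paramiko() {
    if python3 -c 'import paramiko' 2>/dev/null; then
        echo "paramiko OK"
    else
        echo "paramiko missing"
    fi
}
check_paramiko
```

If this reports "paramiko missing", remote management over SSH will not work until the library is installed.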
There is a bug in the paramiko library used by ConVirt. The ConVirt distribution contains the files necessary to patch this bug. To apply the patch, run the patch_paramiko script as root. This patch is needed only on the ConVirt/client node.
# cd <convirt install>/patches
# ./patch_paramiko
3. Make sure ConVirt has execute permissions:
e.g. > chmod 0755 ./ConVirt
4. Run ConVirt and enjoy!
e.g. > ./ConVirt
By default, ConVirt uses the image store in the location where you untarred the tarball, so this step is not required. But if you want to use a different location with more disk space, use the mk_image_store utility to create a copy of the Image Store (ConVirt's provisionable Virtual Machine Image repository).
e.g. > sh ./mk_image_store <path for image store>
This will create the Image Store at the location specified.
If you run the script without any argument, it creates the image store under ~/.convirt. If you are the root user, it creates the image store at /var/cache/convirt.
Note : You need to modify convirt.conf to point to the new location of the image store. Change the image_store and appliance_store values in convirt.conf.
Refer to Initial Setup section under the Image Store.
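For example, the relevant lines in convirt.conf might look like the following (the paths shown are illustrative, not shipped defaults):

```
image_store = /data/convirt/image_store
appliance_store = /data/convirt/appliance_store
```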
The following steps need to be performed as the root user.
1. Remove any earlier versions of convirt from your environment.
e.g.> # rpm -e xenman convirt
2. cd to the directory you've downloaded the rpm package to and install ConVirt:
e.g.> # rpm -Uvh convirt-0.8.2-1.fedora.noarch.rpm
3. If you intend to manage the (Xen-enabled) local machine, you are good to go. Run ConVirt as root.
e.g.> # convirt
If you intend to use ConVirt as a non-root user, you cannot manage the local machine. In this case you also need to set up your own private image store. The following command creates an image store under the ~/.convirt directory. Note: this should be executed as the non-root user.
e.g.> $ mk_image_store
e.g.> $ convirt
NOTE: ConVirt requires the python-paramiko package for remote operations over SSH. A python-paramiko rpm package is available for most distros; however, if you cannot locate a suitable package for your distro, you can manually deploy it (see section I.2 above), and then run rpm with dependency checks disabled.
e.g.> rpm -Uvh convirt-0.8.2-1.fedora.noarch.rpm --nodeps
Refer to Initial Setup section under the Image Store.
Prerequisites for running ConVirt (don't forget to consult the Appendix for Linux distribution-specific notes):
Client Machine :
Xen 3.0.4 or later installed.
Managed Server :
Xen 3.0.4 or later (booted into the Xen kernel and Xend running). Xen 3.1 and above is recommended.
This section describes some additional setup for managing multiple servers. It assumes that you have configured each of your managed servers as per the previous section. This is NOT required if you want to manage virtual machines on the local host/client machine only.
ConVirt uses ssh to read a number of configuration files and to create VBDs and LVM volumes on the remote node. Essentially, all managed servers need to trust all client machines. (Yes, you can have more than one client machine.)
From the client machine, ssh to the managed server using the account from which ConVirt will be started.
# ssh <managed server name>
This will prompt you to add the key to known_hosts. Say yes. This adds /etc/ssh_host_key.pub from the managed server to the user's $HOME/.ssh/known_hosts on the client machine. (Alternatively, you can add it manually.)
## Repeat the above steps for each managed server.
If you want to use password-based authentication, then you are done; you can skip the rest of this section. For a small environment it may be OK to use password-based authentication, but in a large setup we recommend key-based authentication for convenience and tractability.
Refer to the SSH manuals and online material for setting up key-based trust and using ssh-agent. Here are a couple of useful URLs:
http://www.suso.org/docs/shell/ssh.sdf
http://www.linux.ie/articles/tutorials/ssh.php
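As a sketch of the key-based setup (the key file names and the host "managed1" below are examples, not ConVirt requirements):

```shell
# Generate a key pair into a scratch directory. An empty passphrase is used
# here purely for illustration; in practice use a passphrase plus ssh-agent.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/convirt_key"

# The private/public pair is now in place:
ls "$keydir"

# Then copy the public key to each managed server, e.g.:
#   ssh-copy-id -i "$keydir/convirt_key.pub" root@managed1
```

After the public key is on every managed server, ConVirt can authenticate without password prompts (set use_keys accordingly in the per-server config section).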
While managing remote servers, operations like "Open VM File" require selecting a file on a remote server. For GNOME users, ConVirt uses gnome-vfs to browse files on the remote server. As this is done on a separate channel, the user is asked to enter the password for the managed server again. When prompted to save the password, it is recommended not to save it in the key-ring, for security reasons.
The user is NOT prompted for a password if key-based authentication is used, and the user experience is quite seamless between localhost and remote managed server management.
There is a bug in the paramiko library used by ConVirt. The ConVirt distribution contains the files necessary to patch this bug. To apply the patch, run the patch_paramiko script as root. This patch is needed only on the ConVirt/client node.
# cd <convirt install>/patches
# ./patch_paramiko
The Dashboard is a consolidated listing of all known managed servers along with critical performance, availability and configuration metrics for each. It provides the user the ability to ascertain the state of his/her entire deployment at a glance. In addition, most common administrative tasks can be launched by right clicking on a server.
Launch. The Dashboard may be launched by selecting 'Server Pool' in the navigator on the left and then clicking on the Summary Tab on the right hand side. (Note: Upon startup, ConVirt launches the Dashboard by default).
Operations: Left-clicking a row in the Dashboard selects the associated managed server. The following actions may then be performed:
Double-Click: Connect to the managed server (if necessary) and drill down into a more detailed view. This selects the server's node in the navigator on the left hand side and brings up the Summary tab for the server on the right.
Right-Click: Context sensitive menu. Most server operations can be executed directly here.
Sorting: Clicking on the column header will re-sort the listing based on the clicked column. (not available for all columns)
Data. Each row in the Dashboard corresponds to a managed server. The fields are:
Server. The name of the managed server.
Connection. Connectivity status to the managed server (i.e. whether ConVirt has an active connection to the server).
VM Summary. A compact listing of VM status on the server. Total(known)/Running/Paused/Crashed respectively.
VM CPU(%). Aggregate processor usage by VM's running on the server. (Does not include the host OS/Domain-0's processor usage).
VM Mem(%). Aggregate memory usage by VM's running on the server. (Does not include the host OS/Domain-0's memory usage).
Server CPU(s). Number and clock speed of the physical processors on the managed server. (if available)
Server Mem. Total, usable physical memory installed on the managed server. (if available)
Version. The Virtualization platform version being reported by the managed server.
* Most of the data fields in the Dashboard listing are available for managed servers running Xen v3.0.4 or above. For servers running earlier versions of Xen, most fields will display N/A.
A Server Pool node on the left navigator represents a group of managed servers.
The following operations are allowed on a managed server.
ConVirt can be invoked from either a root or a non-root user account.
Root user : When ConVirt is invoked as root, it can manage the local machine as well as other managed servers, for which credentials need to be provided.
Non-root user : When ConVirt is invoked from a non-root account, it cannot manage VMs on the local machine. However, this user can manage other managed servers. Wherever necessary, ConVirt prompts for credentials.
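The distinction can be checked the same way a wrapper script might (a sketch, not ConVirt's actual startup code):

```shell
# Report which mode ConVirt would effectively run in for the current user:
# uid 0 (root) can manage the local host; any other uid cannot.
if [ "$(id -u)" -eq 0 ]; then
    echo "root: local machine and remote managed servers"
else
    echo "non-root: remote managed servers only"
fi
```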
ConVirt now allows VM configurations or running virtual machines to be migrated to another server. This allows the administrator to re-organize the virtual machine to physical machine relationship to balance the workload. For example, a VM needing more CPU can be moved to a machine with more spare CPU cycles. For VM migration to succeed, the following points must be considered:
- Shared storage for all GuestOS disks
- Identical mountpoints on all servers (hosts)
- The kernel and ramdisk for para-virtualized virtual machines should also be shared. (This is not required if pygrub is used.)
- Centrally accessible installation media (iso)
- Preferably use identical machines running the same version of the virtualization platform (preferably Xen 3.1 and above)
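The shared-storage requirement is often met with an NFS export mounted at the same path on every host. A hypothetical /etc/fstab line, identical on each server (the server name and paths are examples only):

```
nfs-server:/export/vm_disks  /vm_disks  nfs  defaults  0  0
```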
Migrate All : There is also a convenient feature to allow migrating all Virtual machines. This is useful particularly when a Server is to be upgraded or brought down for maintenance.
Administrators are constantly faced with selecting an appropriate server on which to create a new VM. They have to look at resource utilization across various servers and pick one so as to optimize resource utilization. This becomes very difficult as the number of servers grows. ConVirt helps with this problem by providing a "Provision VM" operation on a Server Pool. This advanced feature selects an appropriate server for the provisioning operation, using CPU, memory, and the number of configured VMs on each server while making the selection. This smart placement of VMs helps achieve an even distribution of workload among available servers.
ConVirt allows administrators to define their own images and create Virtual Machine configurations from them. For example, in a particular data center, four types of machines are to be provisioned frequently: Red Hat, CentOS, SUSE and Ubuntu. You can configure ConVirt to point to the kernel and ramdisk of each of these distributions and install/deploy many virtual machines using the predefined images. The collection of images is referred to as an Image Store.
ConVirt ships with a default image store
containing a few useful provisionable images. One can either edit them in place or use the new "Create Like" feature to create an image that is similar to one of the existing images.
Sophisticated users may also construct their own, arbitrarily complex image descriptions and provisioning schemes and add them to the Image Store. (For instructions on how to build custom provisionable images, please consult the 'Image Builder's Guide,' a part of this documentation set).
Location. The Image Store is listed in the navigator. Clicking or expanding the Image Store node lists the available, provisionable images. Right-clicking on the Image Store presents a menu with "Import Appliance"; see the next section for details on appliance management.
* A detailed description of the configuration/mechanism distinction and how parameters may be shared between the two is an advanced topic, more fully addressed in the 'Image Builder's Guide.'
Initial setup : Out of the box, the supplied images work with default values. You might want to do some initial setup to suit your environment. There are a few options in the provisioning area that are worth mentioning.
- VM_DISKS_DIR : Location where disks for newly created VMs will be placed. Default: /tmp.
- VM_CONF_DIR : Directory where the VM config files will be placed. Default: /etc/xen.
- http_proxy : Proxy to be used while provisioning. Defaults to no proxy (blank). The format for the proxy is http://myproxyserver:myproxyport
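Put together, these options might look like the following in the provisioning configuration (the values shown are examples, not shipped defaults):

```
VM_DISKS_DIR = /var/lib/xen/images
VM_CONF_DIR = /etc/xen
http_proxy = http://myproxyserver:3128
```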
Another thing you might want to customize, for images that install from a CD-ROM, is to make sure the device for the cdrom is correct. It is usually /dev/cdrom, but it might be different on your machine. You can make this change in the disk area under the General tab.
Template : The image store also contains a _template_ image. This is a skeleton for an image and cannot be provisioned directly. One should populate the template with suitable values and then use "Create Like" on the template to create one or more images. This particular template encourages a shared area for the VM disks, one of the primary requirements for migration.
Appliance Templates : See the section under Appliance Management.
Appliances are fully configured application images that can be deployed on either physical or virtual machines. Their out-of-the-box experience makes them a very attractive solution. They are becoming exceedingly popular, as they reduce costs for software vendors as well as for the IT cells using them. This version of ConVirt aims to simplify Appliance Lifecycle Management.
Appliance Catalog : This version has an online appliance catalog containing a few popular rPath appliances. Each entry in the catalog contains the URL from which the appliance can be downloaded, a description, and a set of technical metadata. The catalog also has a search/filter capability to find the appliance that you are looking for. Once you find what you are looking for, simply import it into the ConVirt Image Store. Once in the store, multiple instances of the appliance can be provisioned like any other image. It shares all the flexibility and configurability offered by the provisioning scheme described in the previous section. We will be adding more appliance vendors soon.
Appliance Management shortcuts : Once an rPath appliance is provisioned, a context-sensitive menu for Appliance Lifecycle Management shows up. It contains shortcuts to the application as well as the following common appliance management operations.
This menu is available only when the appliance is running. Out of the box, the appliances are configured to obtain a dynamic address using DHCP. When any of the above-mentioned menu items is invoked for the first time, you are prompted to enter the IP address/hostname of the VM. You can usually find this information in the console messages when the appliance boots up. You can use the Specify Details menu item to enter this information a priori, or to change it after the fact.
Manual Specification : For importing appliances that are not in the catalog, one can use the "Specify Manually" option to specify the URL and some additional information by hand. For example :
Importing a reference Disk / Cloning a VM:
If you have a reference disk or want to clone an already provisioned VM, you can use the Manual Specification method mentioned above. Simply specify the path of the reference disk as a URL and choose 'Other' as the provider. It is preferable to gzip the disk image.
For example, I can specify the following parameters for cloning the gold image for our employee desktop.
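The gzip step can be as simple as the following sketch (the dd call just fabricates a small stand-in file for demonstration; substitute the path of your real reference disk image):

```shell
# Create a small stand-in "disk image" purely for demonstration.
disk=$(mktemp /tmp/refdisk.XXXXXX)
dd if=/dev/zero of="$disk" bs=1024 count=16 2>/dev/null

# Compress it; the resulting .gz file is what the import URL should point at.
gzip -f "$disk"
ls "${disk}.gz"
```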
Appliance Templates : When an appliance is imported, appliance_vm_conf.template, appliance_hvm_conf.template, appliance_image_conf.template and appliance_desc.template are used. You may want to customize these to suit your environment.
This section contains default directory/file locations. Some important ones include:
This section keeps application specific data.
This section contains server environment information.
The config file has a section for each managed server. This contains information required by ConVirt to connect to managed servers. These sections are relevant only on client machines.
example :
[192.168.0.102]
is_remote = True
login = root
xen_port = 8005
ssh_port = 22
use_keys=False
This section contains items specific to client/user preferences.
console can be used to put the log in the console window.
Additional Notes:
Upon startup, ConVirt looks for convirt.conf, in order, at the following locations:
./convirt.conf - (current directory)
~/.convirt/convirt.conf - (user's home directory)
/etc/convirt.conf - (global location)
If it doesn't find a valid, writable configuration file, ConVirt creates a default file under the current directory.
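The lookup order can be sketched as follows (illustrative only, not ConVirt's actual code):

```shell
# Walk the three candidate locations in order and report the first readable one.
find_convirt_conf() {
    for conf in ./convirt.conf "$HOME/.convirt/convirt.conf" /etc/convirt.conf; do
        if [ -r "$conf" ]; then
            echo "$conf"
            return 0
        fi
    done
    # ConVirt would then create a default file in the current directory.
    echo "none found"
}
find_convirt_conf
```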
Always specify full path of the files, as if seen from the managed server. Avoid using ~ to refer to home directory.
Here is a list of platforms on which ConVirt has been tested.
Client Platforms
Fedora 7, Fedora 8, Ubuntu Gutsy 7.10, OpenSUSE 10.3
Server Platforms
Fedora 7, Fedora 8, CentOS/RHEL 5.1, Debian Etch 4, OpenSUSE 10.3
Xen
Xen 3.0.4 / 3.1/3.2
(Xen 3.2 tested on FC8 only)
GNU General Public License (GPL)
For details, see:
http://www.gnu.org/licenses/gpl.html
Do drop us a line if you download/evaluate/use ConVirt. We would appreciate feedback on the current release as well as suggestions for future releases.
Also, we are hoping for active community assistance in the following areas:
- packaging for more platforms and distributions (e.g. Debian, Windows, etc.)
- ImageStore images (kernel/ramdisk pairs) for various pre-packaged Guest OS/VMs
The best way to reach us is to pop in and say hi at our (low-frequency) mailing list. To sign up, visit http://convirt.sf.net/. We look forward to hearing from you!
Xen is a registered trademark of XenSource Inc.
rPath is a trademark of rPath.
paramiko library : By Robey Pointer
htmltextview.py : By Gustavo J. A. M. Carneiro
TreeViewToolTip.py : By Daniel J. Popowich
Migration Icon : By Luis Vinay
Appliance Feed Setup : By Hozefa Shiyaji
This Appendix contains some notes and useful suggestions for getting a Xen and ConVirt environment running on different distributions.
<Fault 1: 'method "xend.domains_with_state" is not supported'>
To fix this :
[DEFAULT]
default_computed_options = ['arch', 'arch_libdir', 'device_model']
use_3_0_api=True
The Ubuntu Gutsy repository has Xen available. To get it:
# apt-get install ubuntu-xen-server
# apt-get install python-paramiko
Then follow the tarball installation instructions for ConVirt, with the following caveats (until ConVirt becomes available in the repository).
The patch_paramiko script assumes paramiko is installed under the /usr/lib/python directory. This is not true for Ubuntu, where the files are found under /var/lib/python-support/python2.5/paramiko. The paramiko version currently shipped is 1.6.4. To apply the patch manually, do the following:
# cd /var/lib/python-support/python2.5/paramiko
# cp packet.py packet.py.orig
# cp /path/to/convirt/patches/paramiko.packet.py.1.6.4 packet.py
To use it as a managed server, or to manage the local host on which ConVirt is installed:
1. Run the config script under the config-scripts directory.
# ./configure-xend.sh 3.1
2. The hvmloader and qemu-dm paths are not where Xen expects them. Create the following symlinks to fix the problem.
# cd /usr/lib/xen
# ln -s ../xen-ioemu-3.1/boot boot
# cd bin
# ln -s ../../xen-ioemu-3.1/bin/qemu-dm qemu-dm
3. Take a look at the "Validate the Bridge Setup" section under Debian.
# apt-get remove exim4 exim4-base lpr nfs-common portmap pidentd pcmcia-cs pppoe pppoeconf ppp pppconfig
# apt-get install screen ssh debootstrap python python-twisted iproute bridge-utils libcurl3-dev libssl0.9.7
Download the binary bits and install them. This example shows downloading the PAE Xen bits.
# cd /usr/src
# wget http://bits.xensource.com/oss-xen/release/3.1.0/bin.tgz/xen-3.1.0-install-x86_32p.tgz
# tar xvzf xen-3.1.0-install-x86_32p.tgz
# cd dist/
# ./install.sh
# mv /lib/tls /lib/tls.disabled
Let's create the ramdisk:
# depmod 2.6.18-xen
# apt-get install yaird
# mkinitramfs -o /boot/initrd.img-2.6.18-xen 2.6.18-xen
Add Xen to the startup scripts:
# update-rc.d xend defaults 20 21
# update-rc.d xendomains defaults 20 21
Update the grub file
# update-grub
Loop devices setup : To use file-based disks for virtual machines, there should be a sufficient number of /dev/loop devices.
Check if the kernel has the loop module:
# lsmod | grep loop
If your kernel has loop compiled in, the above command will not show anything. In this case, edit the /boot/grub/menu.lst file to add max_loop=64 to the kernel entry (on the module line). Example:
module /boot/vmlinuz-2.6.18-xen root=/dev/sda7 ro console=tty0 max_loop=64
Otherwise you will see a line containing loop with a couple of numbers. In this case, edit the /etc/modules file and add the following line:
loop max_loop=64
Restart the machine. The boot menu should now show Xen 3.1.0 as one of the menu items.
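After the reboot, you can confirm that the loop devices exist (a suggested check; the /dev/loopN naming convention is assumed):

```shell
# Count the available loop devices; with max_loop=64 you would expect up to 64.
ls /dev/loop[0-9]* 2>/dev/null | wc -l
```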
Configure the managed node :
Get the ConVirt tarball (if you are reading this online) and run the following from the config-scripts directory.
# ./configure-xend.sh 3.1
The VMs use the bridge to communicate with other VMs or other servers. This section helps you validate/set up the bridge.
# brctl show
This command should show vif0.0 and peth0 in the last column (interfaces). If that is the case, you are done. If not, your VMs may not be able to access the network.
Here are a couple of suggestions to fix this.
Add the following to the /etc/network/interfaces file:
## To use dhcp:
auto eth0
iface eth0 inet dhcp
Move a couple of udev rules to prevent different eth devices being created on every boot. (Some discussion on this: Ethernet numbering, Eth numbering)
# cd /etc/udev/rules.d
# mkdir backup
# mv *_persistent-net.rules backup
# mv *_persistent-net-generator.rules backup
Reboot the machine and check the brctl show output.
The pygrub bootloader is not available on the SUSE platform. Instead, there is a domUloader.py located in the /usr/lib/xen/boot directory. Once the installation of PV images is complete, you have to change the bootloader manually to /usr/lib/xen/boot/domUloader.py. The bootloader also requires a bootargs param containing the paths to the kernel and ramdisk within the VM image. The bootargs can be specified in the Miscellaneous tab.
Here is an example entry :
bootargs = --entry=xvda1:/vmlinuz,/initrd.img
Note: The domUloader is not able to mount the rPath appliance images. This prevents it from fetching the kernel and ramdisk from the appliance and starting it. We will publish a set of steps once the domUloader issue is resolved.
Xen 3.2 has been tested on the Fedora 8 platform. One notable change is the name of the bridge. So if you are using Xen 3.2, change the name of the bridge in the images to eth0.
If the managed server is Xen 3.2, use the config scripts under config-scripts/xen-3.2 directory.