Old 07-30-2008, 08:37 PM
Cole Robinson
 
virt-install: remote guest creation

I've taken a stab at getting remote guest creation up and running
for virt-install. Most of the existing code translates well to the
remote case, but the main issue is storage: how does the user tell
us where to create and find existing storage/media, and how can we
usefully validate this info. The libvirt storage API is the lower-level
mechanism that allows this fun stuff to happen; it's really just a
matter of choosing a sane interface for it all.

The two interface problems we have are:

- Changes to VirtualDisk to handle storage apis
- Changes to virt-install cli to allow specifying storage info

For VirtualDisk, I added two options
- volobj : a libvirt virStorageVol instance
- volinstall : a virtinst StorageVolume instance

If the user wants the VirtualDisk to use existing storage, they
will need to query libvirt for the virStorageVol and pass this
to the VirtualDisk, which will take care of the rest.

If the user wants to create a new managed volume, they populate
a StorageVolume object (posted earlier but not yet committed)
and pass this to VirtualDisk. VirtualDisk will map the setup and
is_size_conflict commands as appropriate, so all current
infrastructure will just work.
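
As a rough sketch (names and sizes here are just placeholders; see the
attached test script for a working example of both cases):

    import libvirt
    import virtinst
    from virtinst import VirtualDisk
    from virtinst.Storage import StoragePool

    conn = libvirt.open("qemu+ssh://somehost/system")
    pool = conn.storagePoolLookupByName("default")

    # Existing managed storage: look up the volume and hand it over
    vol = pool.storageVolLookupByName("existing.img")
    disk = VirtualDisk(volobj=vol, conn=conn)

    # New managed storage: build a StorageVolume describing what to create
    pooltype = virtinst.util.get_xml_path(pool.XMLDesc(0), "/pool/@type")
    volclass = StoragePool.get_volume_for_pool(pooltype)
    volinst = volclass(name="newguest.img", pool=pool,
                       capacity=(5 * 1024 * 1024 * 1024))
    disk = VirtualDisk(volinstall=volinst, conn=conn)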

Now there wasn't a lot of functionality that needed to be added
to accomplish this, but VirtualDisk was becoming unmaintainable,
so I took this opportunity to break it out into its own file
and clean it up quite a bit. I also added a parent VirtualDevice
class, which I will move all the other device classes over to in
the future, but that's not an immediate priority.

The other choices here are to offload looking up storage volumes
to VirtualDisk, or maybe add an option to attempt to look up
the 'path' parameter as a storage volume if we are on a remote
connection. These could just be options added later, though.
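
If we did add that lookup, it would be pretty trivial; roughly something
like this hypothetical helper, falling back to treating 'path' as a plain
local path when libvirt doesn't know about it:

    import libvirt

    def lookup_vol(conn, path):
        # Try to resolve 'path' as a libvirt managed storage volume
        try:
            return conn.storageVolLookupByPath(path)
        except libvirt.libvirtError:
            return None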


The next piece is how the interface changes for virt-install.
Here are the storage use cases we now have:

1) use existing non-managed (local) disk
- signified by --file /some/real/path

2) create non-managed (local) disk
- signified by --file /some/real/dir/idontexist

3) create managed disk
- all we would really need is the pool name to install on.

4) use existing managed disk
- could be a pool name, vol name combo, or perhaps
even an absolute path representing a volume.

5) use existing non-managed media (cdrom)
- signified by --cdrom /some/real/path

6) use existing managed media
- same syntax as existing managed disk

The options I see are:

A) Overload existing options (--file, --cdrom): we can detect that we
are using a remote connection, and try to look up a passed path
as a volume. If the user wants to specify pool/vol by name, we
can use something like 'poolname:volname' with some reasonable
delimiter. However, this could collide with some legitimate
storage names, so it may not be feasible. We could always get
fancy and allow escaping characters, though.

B) Add extra options. To completely get away without having to
add some 'poolname:volname' type format as above, we would
probably need a --pool-name and --vol-name, and some way to
indicate that we want these for cdrom media as well :/
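
To make the two flavors concrete (syntax purely illustrative, nothing
here is final):

    # Option A: overload --file, detecting a 'poolname:volname' string
    virt-install --connect qemu+ssh://somehost/system ... \
        --file default:guest.img

    # Option B: separate options for pool and volume names
    virt-install --connect qemu+ssh://somehost/system ... \
        --pool-name default --vol-name guest.img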

I've currently been testing this with a hacked up version of A.

The only remaining issue is some trouble with Guest objects
expecting the install location to be local, but I'm still
playing with that.

Attached are the new VirtualDisk and VirtualDevice files; this
all works and passes validation testing, but I haven't given it
any real polish yet.

Any feedback is appreciated.

Thanks,
Cole


#
# Base class for all VM devices
#
# Copyright 2008 Red Hat, Inc.
# Cole Robinson <crobinso@redhat.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301 USA.

import libvirt
import logging

import CapabilitiesParser
import util
from virtinst import _virtinst as _

class VirtualDevice(object):
    """
    Base class for all domain xml device objects.
    """

    def __init__(self, conn=None):
        """
        @param conn: libvirt connection to validate device against
        @type conn: virConnect
        """

        if conn:
            if not isinstance(conn, libvirt.virConnect):
                raise ValueError, _("'conn' must be a virConnectPtr instance")
        self._conn = conn

        self.__remote = None
        if self.conn:
            self.__remote = util.is_remote(self.conn.getURI())

        self._caps = None
        if self.conn:
            self._caps = CapabilitiesParser.parse(self.conn.getCapabilities())

    def get_conn(self):
        return self._conn
    conn = property(get_conn)

    def _is_remote(self):
        return self.__remote

    def _check_bool(self, val, name):
        if val not in [True, False]:
            raise ValueError, _("'%s' must be True or False" % name)

    def _check_str(self, val, name):
        if type(val) is not str:
            raise ValueError, _("'%s' must be a string, not '%s'." %
                                (name, type(val)))

    def get_xml_config(self):
        """
        Construct and return device xml

        @return: device xml representation as a string
        @rtype: str
        """
        raise NotImplementedError()
#
# Classes for building disk device xml
#
# Copyright 2006-2008 Red Hat, Inc.
# Jeremy Katz <katzj@redhat.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301 USA.

import os, stat, statvfs
import libxml2
import logging
import libvirt
import __builtin__

import util
import Storage
from VirtualDevice import VirtualDevice
from virtinst import _virtinst as _

class VirtualDisk(VirtualDevice):
    DRIVER_FILE = "file"
    DRIVER_PHY = "phy"
    DRIVER_TAP = "tap"

    DRIVER_TAP_RAW = "aio"
    DRIVER_TAP_QCOW = "qcow"
    DRIVER_TAP_VMDK = "vmdk"

    DEVICE_DISK = "disk"
    DEVICE_CDROM = "cdrom"
    DEVICE_FLOPPY = "floppy"
    devices = [DEVICE_DISK, DEVICE_CDROM, DEVICE_FLOPPY]

    TYPE_FILE = "file"
    TYPE_BLOCK = "block"
    types = [TYPE_FILE, TYPE_BLOCK]

    def __init__(self, path=None, size=None, transient=False, type=None,
                 device=DEVICE_DISK, driverName=None, driverType=None,
                 readOnly=False, sparse=True, conn=None, volobj=None,
                 volinstall=None):
        """
        @param path: path to the disk image, or path to create
        @type path: str
        @param size: size of local file to create
        @type size: long (in gigabytes)
        @param transient: whether the device should be transient
        @param type: media type (file, block, ...)
        @type type: str
        @param device: device type (disk, cdrom, floppy)
        @param driverName: name of the driver backing the device (file, phy, tap)
        @param driverType: subtype for 'tap' devices (aio, qcow, vmdk)
        @param readOnly: whether the device should be read only
        @param sparse: whether to skip fully allocating newly created storage
        @param conn: libvirt connection to validate device against
        @param volobj: existing libvirt virStorageVol to use as the disk
        @param volinstall: virtinst StorageVolume instance describing a
                           volume to be created for this disk
        """

        VirtualDevice.__init__(self, conn=conn)
        self.set_read_only(readOnly, validate=False)
        self.set_sparse(sparse, validate=False)
        self.set_type(type, validate=False)
        self.set_device(device, validate=False)
        self.set_path(path, validate=False)
        self.set_size(size, validate=False)

        self.transient = transient
        self._driverName = driverName
        self._driverType = driverType
        self.target = None
        self.volobj = volobj
        self.volinstall = volinstall

        self.__validate_params()


    def __repr__(self):
        return "%s:%s" % (self.type, self.path)



    def get_path(self):
        return self._path
    def set_path(self, val, validate=True):
        if val is not None:
            self._check_str(val, "path")
            val = os.path.abspath(val)
        self.__validate_wrapper("_path", val, validate)
    path = property(get_path, set_path)

    def get_size(self):
        return self._size
    def set_size(self, val, validate=True):
        if val is not None:
            if type(val) not in [int, float, long] or val < 0:
                raise ValueError, _("'size' must be a number greater than 0.")
        self.__validate_wrapper("_size", val, validate)
    size = property(get_size, set_size)

    def get_type(self):
        return self._type
    def set_type(self, val, validate=True):
        if val is not None:
            self._check_str(val, "type")
            if val not in self.types:
                raise ValueError, _("Unknown storage type '%s'" % val)
        self.__validate_wrapper("_type", val, validate)
    type = property(get_type, set_type)

    def get_device(self):
        return self._device
    def set_device(self, val, validate=True):
        self._check_str(val, "device")
        if val not in self.devices:
            raise ValueError, _("Unknown device type '%s'" % val)
        self.__validate_wrapper("_device", val, validate)
    device = property(get_device, set_device)

    def get_driver_name(self):
        return self._driverName
    driver_name = property(get_driver_name)

    def get_driver_type(self):
        return self._driverType
    driver_type = property(get_driver_type)

    def get_sparse(self):
        return self._sparse
    def set_sparse(self, val, validate=True):
        self._check_bool(val, "sparse")
        self.__validate_wrapper("_sparse", val, validate)
    sparse = property(get_sparse, set_sparse)

    def get_read_only(self):
        return self._readOnly
    def set_read_only(self, val, validate=True):
        self._check_bool(val, "read_only")
        self.__validate_wrapper("_readOnly", val, validate)
    read_only = property(get_read_only, set_read_only)



    # Validation assistance methods
    def __validate_wrapper(self, varname, newval, validate=True):
        try:
            orig = getattr(self, varname)
        except:
            orig = newval
        setattr(self, varname, newval)
        if validate:
            try:
                self.__validate_params()
            except:
                setattr(self, varname, orig)
                raise

    # Detect file or block type from passed storage parameters
    def __set_dev_type(self):
        dtype = None
        if self.volobj:
            # vol info is [ vol type (file or block), capacity, allocation ]
            t = self.volobj.info()[0]
            if t == libvirt.VIR_STORAGE_VOL_FILE:
                dtype = self.TYPE_FILE
            elif t == libvirt.VIR_STORAGE_VOL_BLOCK:
                dtype = self.TYPE_BLOCK
            else:
                raise ValueError, _("Unknown storage volume type.")
        elif self.volinstall:
            if isinstance(self.volinstall, Storage.FileVolume):
                dtype = self.TYPE_FILE
            else:
                raise ValueError, _("Unknown dev type for volinstall.")
        elif self.path:
            if stat.S_ISBLK(os.stat(self.path)[stat.ST_MODE]):
                dtype = self.TYPE_BLOCK
            else:
                dtype = self.TYPE_FILE

        logging.debug("Detected storage as type '%s'" % dtype)
        if self.type is not None and dtype != self.type:
            raise ValueError(_("Passed type '%s' does not match detected "
                               "storage type '%s'" % (self.type, dtype)))
        self.set_type(dtype, validate=False)

    def __validate_params(self):

        if self._is_remote() and not (self.volobj or self.volinstall):
            raise ValueError, _("Must specify libvirt managed storage if on "
                                "a remote connection")

        if self.device == self.DEVICE_CDROM:
            logging.debug("Forcing '%s' device as read only." % self.device)
            self.set_read_only(True, validate=False)

        # Only floppy or cdrom can be created w/o media
        if self.path is None and not self.volobj and not self.volinstall:
            if self.device != self.DEVICE_FLOPPY and \
               self.device != self.DEVICE_CDROM:
                raise ValueError, _("Device type '%s' requires a path") % \
                                  self.device
            # If no path, our work is done
            return True

        if self.volinstall:
            logging.debug("Overwriting 'size' with 'capacity' from "
                          "passed StorageVolume")
            self.set_size(self.volinstall.capacity*1024*1024*1024,
                          validate=False)

        if self.volobj or self.volinstall or self._is_remote():
            logging.debug("Using storage api objects for VirtualDisk")
            using_path = False
        else:
            logging.debug("Using self.path for VirtualDisk.")
            using_path = True

        if ((using_path and os.path.exists(self.path))
            or self.volobj):
            logging.debug("VirtualDisk storage exists.")

            if using_path and os.path.isdir(self.path):
                raise ValueError, _("The path must be a file or a device,"
                                    " not a directory")
            self.__set_dev_type()
            return True

        logging.debug("VirtualDisk storage does not exist.")
        if self.device == self.DEVICE_FLOPPY or \
           self.device == self.DEVICE_CDROM:
            raise ValueError, _("Cannot create storage for %s device.") % \
                              self.device

        if using_path:
            # Not true for api?
            if self.type is self.TYPE_BLOCK:
                raise ValueError, _("Local block device path must exist.")
            self.set_type(self.TYPE_FILE, validate=False)

            # Path doesn't exist: make sure we have write access to dir
            if not os.access(os.path.dirname(self.path), os.W_OK):
                raise ValueError, _("No write access to directory '%s'") % \
                                  os.path.dirname(self.path)
        else:
            self.__set_dev_type()

        if not self.size:
            raise ValueError, _("'size' is required for non-existent disks")
        ret = self.is_size_conflict()
        if ret[0]:
            raise ValueError, ret[1]
        elif ret[1]:
            logging.warn(ret[1])



    def setup(self, progresscb):
        """Build storage media if required"""
        if self.volobj:
            return
        elif self.volinstall:
            self.volinstall.install(meter=progresscb)
            return
        elif (self.type == VirtualDisk.TYPE_FILE and self.path is not None
              and not os.path.exists(self.path)):
            size_bytes = long(self.size * 1024L * 1024L * 1024L)
            progresscb.start(filename=self.path, size=long(size_bytes),
                             text=_("Creating storage file..."))
            fd = None
            try:
                try:
                    fd = os.open(self.path, os.O_WRONLY | os.O_CREAT)
                    if self.sparse:
                        os.lseek(fd, size_bytes, 0)
                        os.write(fd, '\x00')
                        progresscb.update(self.size)
                    else:
                        buf = '\x00' * 1024 * 1024   # 1 meg of nulls
                        for i in range(0, long(self.size * 1024L)):
                            os.write(fd, buf)
                            progresscb.update(long(i * 1024L * 1024L))
                except OSError, e:
                    raise RuntimeError, _("Error creating diskimage %s: %s" %
                                          (self.path, str(e)))
            finally:
                if fd is not None:
                    os.close(fd)
                progresscb.end(size_bytes)
        # FIXME: set selinux context?

    def get_xml_config(self, disknode):
        typeattr = 'file'
        if self.type == VirtualDisk.TYPE_BLOCK:
            typeattr = 'dev'

        path = self.path
        if self.volobj:
            path = self.volobj.path()
        elif self.volinstall:
            path = self.volinstall.target_path
        if path:
            path = util.xml_escape(path)

        ret = "    <disk type='%(type)s' device='%(device)s'>\n" % \
              { "type": self.type, "device": self.device }
        if not (self.driver_name is None):
            if self.driver_type is None:
                ret += "      <driver name='%(name)s'/>\n" % \
                       { "name": self.driver_name }
            else:
                ret += "      <driver name='%(name)s' type='%(type)s'/>\n" % \
                       { "name": self.driver_name, "type": self.driver_type }
        if path is not None:
            ret += "      <source %(typeattr)s='%(disk)s'/>\n" % \
                   { "typeattr": typeattr, "disk": path }
        if self.target is not None:
            disknode = self.target
        ret += "      <target dev='%(disknode)s'/>\n" % \
               { "disknode": disknode }
        if self.read_only:
            ret += "      <readonly/>\n"
        ret += "    </disk>\n"
        return ret

    def is_size_conflict(self):
        """Reports if disk size conflicts with available space

           returns a two element tuple:
               first element is True if a fatal conflict occurs
               second element is a string description of the conflict or None
           Non fatal conflicts (sparse disk exceeds available space) will
           return (False, "description of collision")
        """

        if (self.volobj or self.size is None or not self.path
            or os.path.exists(self.path) or self.type != self.TYPE_FILE):
            return (False, None)

        if self.volinstall:
            return self.volinstall.is_size_conflict()

        ret = False
        msg = None
        vfs = os.statvfs(os.path.dirname(self.path))
        avail = vfs[statvfs.F_FRSIZE] * vfs[statvfs.F_BAVAIL]
        need = long(self.size * 1024L * 1024L * 1024L)
        if need > avail:
            if self.sparse:
                msg = _("The filesystem will not have enough free space"
                        " to fully allocate the sparse file when the guest"
                        " is running.")
            else:
                ret = True
                msg = _("There is not enough free space to create the disk.")

        if msg:
            msg += _(" %d M requested > %d M available") % \
                   ((need / (1024*1024)), (avail / (1024*1024)))
        return (ret, msg)

    def is_conflict_disk(self, conn):
        vms = []
        # Get running domains
        ids = conn.listDomainsID()
        for id in ids:
            try:
                vm = conn.lookupByID(id)
                vms.append(vm)
            except libvirt.libvirtError:
                # Guest is probably in the process of dying
                logging.warn("Failed to lookup domain id %d" % id)
        # Get defined (inactive) domains
        names = conn.listDefinedDomains()
        for name in names:
            try:
                vm = conn.lookupByName(name)
                vms.append(vm)
            except libvirt.libvirtError:
                # Guest is probably in the process of dying
                logging.warn("Failed to lookup domain name %s" % name)

        path = self.path
        if self.volobj:
            path = self.volobj.path()
        elif self.volinstall:
            path = self.volinstall.target_path
        if path:
            path = util.xml_escape(path)

        count = 0
        for vm in vms:
            doc = None
            try:
                doc = libxml2.parseDoc(vm.XMLDesc(0))
            except:
                continue
            ctx = doc.xpathNewContext()
            try:
                try:
                    count += ctx.xpathEval("count(/domain/devices/disk/source[@dev='%s'])" % path)
                    count += ctx.xpathEval("count(/domain/devices/disk/source[@file='%s'])" % path)
                except:
                    continue
            finally:
                if ctx is not None:
                    ctx.xpathFreeContext()
                if doc is not None:
                    doc.freeDoc()
        if count > 0:
            return True
        else:
            return False



# Back compat class to avoid an ABI break
class XenDisk(VirtualDisk):
    pass
_______________________________________________
et-mgmt-tools mailing list
et-mgmt-tools@redhat.com
https://www.redhat.com/mailman/listinfo/et-mgmt-tools
 
Old 07-30-2008, 09:17 PM
Michael DeHaan
 
virt-install: remote guest creation

Cole Robinson wrote:

I've taken a stab at getting remote guest creation up and running
for virt-install. Most of the existing code translates well to the
remote case, but the main issue is storage: how does the user tell
us where to create and find existing storage/media, and how can we
usefully validate this info. The libvirt storage API is the lower
level mechanism that allows this fun stuff to happen, its really
just a matter of choosing a sane interface for it all.

The two interface problems we have are:

- Changes to VirtualDisk to handle storage apis
- Changes to virt-install cli to allow specifying storage info

For VirtualDisk, I added two options
- volobj : a libvirt virStorageVol instance
- volinstall : a virtinst StorageVolume instance



Do you have examples of what this might look like for VirtualDisk? I'm
interested in teaching koan how to install on remote hosts.



If the user wants the VirtualDisk to use existing storage, they
will need to query libvirt for the virStorageVol and pass this
to the VirtualDisk, which will take care of the rest.


Basically the use cases I care about are:

Install to a specific path and/or filename
Install to an existing partition
Install to a new partition in an existing LVM volume group.

As koan needed to do this before the storage stuff (IIRC) I have code in
koan to manage LVM. I'll need to keep it around for support of RHEL
5.older and F8-previous, so if the new stuff works relatively the same
that would be great.


Basically, if I can pass in a path or LVM volume group name, I'm happy.
Needing to grok any XML would make me unhappy.



If the user wants to create a new managed volume, they populate
a StorageVolume object (posted earlier but not yet committed)
and pass this VirtualDisk. VirtualDisk will map the setup and
is_size_conflict commands as appropriate, so all current
infrastructure will just work.

Now there wasn't a lot of functionality that needed to be added
to accomplish this, but VirtualDisk was becoming unmaintainable
so I took this opportunity to break it out into it's own file
and clean it up quite a bit. I also added a parent VirtualDevice
class which I will move all the other device classes over to in
the future, but it's not an immediate priority.

The other choices here are to offload looking up storage volumes
to VirtualDisk, or maybe add an option to attempt to lookup
the 'path' parameter as a storage volume if we are on a remote
connection. These could just be options added later though.


The next piece is how the interface changes for virt-install.
Here are the storage use cases we now have:

1) use existing non-managed (local) disk
- signified by --file /some/real/path

2) create non-managed (local) disk
- signified by --file /some/real/dir/idontexist



What is "managed vs unmanaged" here?


3) create managed disk
- all we would really need is the pool name to install on.



What's a pool in context here? I'm basically trying to make sure
this is usable without requiring ovirt.



4) use existing managed disk
- could be a pool name, vol name combo, or perhaps
even an absolute path representing a volume.


A volume group name is good.

A path to a volume group is OK too, as we know that just lives under
/dev/mapper and is easy to get to.




5) use existing non-managed media (cdrom)
- signified by --cdrom /some/real/path

6) use existing managed media
- same syntax as existing managed disk

The options I see are:

A) overload existing options (--file, --cdrom): we can detect we
are using a remote connection, and try to lookup a passed path
as a volume. if the user wants to specify pool/vol by name, we
can use something like 'poolname:volname' for some reasonable
delimiter. however this could collide with some legitimate
storage names so it may not feasible. We could always get

fancy and allow escaping characters though.

IMHO, autodetection would be very nice if it worked just like it was a
path on the remote system and the only difference was that you were
using a different connection string. This makes remote work as close
to local as possible. If that's not doable that's ok.



B) Add extra options. To completely get away without having to
add some 'poolname:volname' type format as above, we would
probably need a --pool-name and --vol-name, and someway to
indicate that we want these for cdrom media as well :/



Again, I'm not sure what a "pool" is here in the context of this library.
LVMs, mount points, I get.


Hope that helps?

--Michael



I've currently been testing this with a hacked up version of A.

The only remaining issue is some trouble with Guest objects
expecting the install location to be local, but I'm still
playing with that.

Attached are the new VirtualDisk and VirtualDevice files, this
all works and passes validation testing but I haven't given it
any real polish yet.

Any feedback is appreciated.

Thanks,
Cole



------------------------------------------------------------------------


_______________________________________________
et-mgmt-tools mailing list
et-mgmt-tools@redhat.com
https://www.redhat.com/mailman/listinfo/et-mgmt-tools


_______________________________________________
et-mgmt-tools mailing list
et-mgmt-tools@redhat.com
https://www.redhat.com/mailman/listinfo/et-mgmt-tools
 
Old 07-30-2008, 09:53 PM
Cole Robinson
 
virt-install: remote guest creation

Michael DeHaan wrote:
> Cole Robinson wrote:
>> I've taken a stab at getting remote guest creation up and running
>> for virt-install. Most of the existing code translates well to the
>> remote case, but the main issue is storage: how does the user tell
>> us where to create and find existing storage/media, and how can we
>> usefully validate this info. The libvirt storage API is the lower
>> level mechanism that allows this fun stuff to happen, its really
>> just a matter of choosing a sane interface for it all.
>>
>> The two interface problems we have are:
>>
>> - Changes to VirtualDisk to handle storage apis
>> - Changes to virt-install cli to allow specifying storage info
>>
>> For VirtualDisk, I added two options
>> - volobj : a libvirt virStorageVol instance
>> - volinstall : a virtinst StorageVolume instance
>>
>
> Do you have examples of what this might look like for VirtualDisk? I'm
> interested in teaching koan how to install on remote hosts.

I've attached a pretty ugly script I was using just to basically test
this stuff at first. It has hardcoded values specific to my machine
so it won't work if you run it. However it has an example that covers
both of the above cases.

Please read my comments below, though, regarding the libvirt storage
APIs.

>
>> If the user wants the VirtualDisk to use existing storage, they
>> will need to query libvirt for the virStorageVol and pass this
>> to the VirtualDisk, which will take care of the rest.
>>
> Basically the use cases I care about are:
>
> Install to a specific path and/or filename
> Install to an existing partition
> Install to a new partition in an existing LVM volume group.
>
> As koan needed to do this before the storage stuff (IIRC) I have code in
> koan to manage LVM. I'll need to keep it around for support of RHEL
> 5.older and F8-previous, so if the new stuff works relatively the same
> that would be great.
>
> Basically if I can pass in a path or LVM volume group name, I'm happy.
> Needing to grok any XML would make me unhappy

There won't be any need to mess with xml here.

<snip>

>>
>> The next piece is how the interface changes for virt-install.
>> Here are the storage use cases we now have:
>>
>> 1) use existing non-managed (local) disk
>> - signified by --file /some/real/path
>>
>> 2) create non-managed (local) disk
>> - signified by --file /some/real/dir/idontexist
>>
>
> What is "managed vs unmanaged" here?

Managed = Libvirt storage APIs. The libvirt storage APIs are how
we know what exists on remote systems, and how we tell remote
systems to create this file with this format, or that partition
with that size, etc.

The 'pool' and 'volume' terminology is all part of this.

http://libvirt.org/storage.html

The gist of it is:

A 'pool' is some resource that can be carved up into units to be
used directly by VMs. Pool types are a directory, an nfs mount, a
filesystem mount (all carved into flat files), an lvm volgroup, raw
disk devices (carved into smaller block devices), and iscsi (for
which volume creation isn't supported).

A 'volume' is the carved-up unit, directly usable as storage for
a VM.

All this remote guest creation stuff won't 'just work' even if the
user passes the correct parameters: the remote host will have to be
configured in advance to teach libvirt about what storage is
available. This can either be done on the command line using
virsh pool-create-as, or through virt-manager's new storage wizards
(not posted yet; 95% completed and working, it just hasn't been
polished up, and it depends on some uncommitted virtinst work).

We should probably have libvirt set up a default storage
pool for /var/lib/libvirt/images so that there would be
a typical out of the box option for users.
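
For example, a rough sketch of creating such a directory pool through
the libvirt python bindings (virsh pool-create-as does the equivalent
from the command line; the pool name here is just an example, and the
directory must already exist):

    import libvirt

    conn = libvirt.open("qemu+ssh://somehost/system")

    # Minimal XML for a directory-backed storage pool
    pool_xml = """
    <pool type='dir'>
      <name>default</name>
      <target>
        <path>/var/lib/libvirt/images</path>
      </target>
    </pool>
    """

    # Create (start) the pool on the remote host
    pool = conn.storagePoolCreateXML(pool_xml, 0)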

- Cole


import virtinst
from virtinst import VirtualDisk as vd
from virtinst.Storage import StoragePool as sp
from virtinst.Storage import StorageVolume as sv

import logging
import sys
import libvirt

# Set debug logging to print
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
streamHandler = logging.StreamHandler(sys.stderr)
streamHandler.setLevel(logging.DEBUG)
root_logger.addHandler(streamHandler)

LOCAL_CONN = "qemu:///system"
REMOTE_CONN = "qemu+ssh://localhost/system"

POOL = "default"
VOL = "test.img"
GUESTNAME="testguest"

GOODSIZE=5*1024*1024*1024
BADSIZE=10000*1024*1024*1024

print "open conn"
localconn = libvirt.open(LOCAL_CONN)
print "get pool"
pool = localconn.storagePoolLookupByName(POOL)
print "get vol"
vol = pool.storageVolLookupByName(VOL)

print "get pooltype"
pooltype = virtinst.util.get_xml_path(pool.XMLDesc(0), "/pool/@type")
print "get volclass"
volclass = sp.get_volume_for_pool(pooltype)

print "create volclass instance"
volinst = volclass(name="testguest", pool=pool, capacity=GOODSIZE)

def check_disk(disk):

    print "\nis_conflict_disk:"
    print disk.is_conflict_disk(localconn)

    print "\nis_size_conflict:"
    print disk.is_size_conflict()

    print "\nget_xml_config()"
    print disk.get_xml_config("hda")
    print "\n"

print "\n\nCreating volobj disk:"
d = vd(volobj=vol)
check_disk(d)

print "\n\nCreating volinst disk:"
d = vd(volinstall=volinst)
check_disk(d)
_______________________________________________
et-mgmt-tools mailing list
et-mgmt-tools@redhat.com
https://www.redhat.com/mailman/listinfo/et-mgmt-tools
 
Old 07-30-2008, 10:00 PM
Michael DeHaan
 
virt-install: remote guest creation

Cole Robinson wrote:

Michael DeHaan wrote:


Cole Robinson wrote:


I've taken a stab at getting remote guest creation up and running
for virt-install. Most of the existing code translates well to the
remote case, but the main issue is storage: how does the user tell
us where to create and find existing storage/media, and how can we
usefully validate this info. The libvirt storage API is the lower
level mechanism that allows this fun stuff to happen, its really
just a matter of choosing a sane interface for it all.

The two interface problems we have are:

- Changes to VirtualDisk to handle storage apis
- Changes to virt-install cli to allow specifying storage info

For VirtualDisk, I added two options
- volobj : a libvirt virStorageVol instance
- volinstall : a virtinst StorageVolume instance


Do you have examples of what this might look like for VirtualDisk? I'm
interested in teaching koan how to install on remote hosts.



I've attached a pretty ugly script I was using just to basically test
this stuff at first. It has hardcoded values specific to my machine
so it won't work if you run it. However it has an example that covers
both of the above cases.

Please read my below comments though regarding the libvirt storage
apis.




If the user wants the VirtualDisk to use existing storage, they
will need to query libvirt for the virStorageVol and pass this
to the VirtualDisk, which will take care of the rest.



Basically the use cases I care about are:

Install to a specific path and/or filename
Install to an existing partition
Install to a new partition in an existing LVM volume group.

As koan needed to do this before the storage stuff (IIRC) I have code in
koan to manage LVM. I'll need to keep it around for support of RHEL
5.older and F8-previous, so if the new stuff works relatively the same
that would be great.


Basically if I can pass in a path or LVM volume group name, I'm happy.
Needing to grok any XML would make me unhappy



There won't be any need to mess with xml here.

<snip>



Excellent!

The next piece is how the interface changes for virt-install.
Here are the storage use cases we now have:

1) use existing non-managed (local) disk
- signified by --file /some/real/path

2) create non-managed (local) disk
- signified by --file /some/real/dir/idontexist



What is "managed vs unmanaged" here?



Managed = Libvirt storage APIs. The libvirt storage APIs are how
we know what exists on remote systems, and how we tell remote
systems to create this file with this format, or that partition
with that size, etc.

The 'pool' and 'volume' terminology is all part of this.

http://libvirt.org/storage.html

The gist of it is:

A 'pool' is some resource that can be carved up into units to be
used directly by VMs. Pool types are a directory, nfs mount,
filesystem mount (all carved into flat files), lvm volgroup,

raw disk devices (carved into smaller blk devs), and iscsi
(which creation isn't supported on).

A 'volume' is the carved up unit, directly usable as storage for
a VM.

All this remote guest creation stuff won't 'just work' if the user
passes the correct parameters, the remote host will have to be
configured in advance to teach libvirt about what storage is
available. This could either be done on the command line using
virsh pool-create-as, or use virt-manager and use wizards to
do all this fun stuff (not posted yet. 95% completed and
working, just hasn't been polished up, and it's dependent on
some not committed virtinst work).



Here's what I think would be an interesting use case: I'm looking to
teach koan to do this fully remotely. The idea is that you have a
provisioning server, and you want to be able to remotely declare what
you want and have it help make that come into reality.

So in the WebUI for "cobbler system add", I have a button now that says
"Save". I want to be able to have another button that says "Save &
Create On Host", where you could type in which host. Obviously, the
command line for cobbler is where most people would be using it, but
that's another example of how this could be seen to work.



We should probably have libvirt set up a default storage
pool for /var/lib/libvirt/images so that there would be
a typical out of the box option for users.



That would be very good.

If I can do the pool-add stuff remotely via libvirt-remote, that would
also be nice, though if I have to do it over SSH that is just the same,
as we'd most likely be using the SSH version of libvirt-remote anyway.


I'm pretty sure that either way we can get it to work for a heavily
scriptable remote-deploy API kind of solution. Neat.



- Cole





_______________________________________________
et-mgmt-tools mailing list
et-mgmt-tools@redhat.com
https://www.redhat.com/mailman/listinfo/et-mgmt-tools
 
