.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
About Working with Instances
============================
CloudStack provides administrators with complete control over the
lifecycle of all guest Instances executing in the cloud. CloudStack provides
several guest management operations for end Users and administrators.
Instances may be stopped, started, rebooted, and destroyed.
Guest Instances have a name and group. Instance names and groups are opaque to
CloudStack and are available for end Users to organize their Instances. Each
Instance can have three names for use in different contexts. Only two of these
names can be controlled by the User:
- Instance name – a unique, immutable ID that is generated by
CloudStack and cannot be modified by the User. This name conforms to
the requirements in IETF RFC 1123.
- Display name – the name displayed in the CloudStack web UI. Can be
set by the User. Defaults to Instance name.
- Name – host name that the DHCP server assigns to the Instance. Can be set
by the User. Defaults to Instance name.
.. note::
You can append the display name of a guest Instance to its internal name.
For more information, see `“Appending a Name to the Guest Instance’s
Internal Name” <#appending-a-name-to-the-guest-instance-s-internal-name>`_.
Guest Instances can be configured to be Highly Available (HA). An HA-enabled
Instance is monitored by the system. If the system detects that the Instance is
down, it will attempt to restart the Instance, possibly on a different host.
For more information, see HA-Enabled Instances.
Each new Instance is allocated one public IP address. When the Instance is started,
CloudStack automatically creates a static NAT between this public IP
address and the private IP address of the Instance.
If elastic IP is in use (with the NetScaler load balancer), the IP
address initially allocated to the new Instance is not marked as elastic. The
User must replace the automatically configured IP with a specifically
acquired elastic IP, and set up the static NAT mapping between this new
IP and the guest Instance’s private IP. The Instance’s original IP address is then
released and returned to the pool of available public IPs. Optionally,
you can also decide not to allocate a public IP to an Instance in an
EIP-enabled Basic zone. For more information on Elastic IP, see
`“About Elastic IP” <networking/elastic_ips.html>`_.
CloudStack cannot distinguish a guest Instance that was shut down by the User
(such as with the “shutdown” command in Linux) from an Instance that shut down
unexpectedly. If an HA-enabled Instance is shut down from inside the Instance,
CloudStack will restart it. To shut down an HA-enabled Instance, you must go
through the CloudStack UI or API.
.. note::
**Monitor Instances for Max Capacity**
The CloudStack administrator should monitor the total number of Instances
in each cluster, and disable allocation to the cluster if the total
is approaching the maximum that the hypervisor can handle. Be sure
to leave a safety margin to allow for the possibility of one or more
hosts failing, which would increase the Instance load on the other hosts as
the Instances are automatically redeployed. Consult the documentation for your
chosen hypervisor to find the maximum permitted number of Instances per host,
then use CloudStack global configuration settings to set this as the
default limit. Monitor the Instance activity in each cluster at all times.
Keep the total number of Instances below a safe level that allows for the
occasional host failure. For example, if there are N hosts in the
cluster, and you want to allow for one host in the cluster to be down at
any given time, the total number of Instances you can permit in the
cluster is at most (N-1) \* (per-host-limit). Once a cluster reaches
this number of Instances, use the CloudStack UI to disable allocation of more
Instances to the cluster.
Instance Lifecycle
==================
Instances can be in the following states:
- Created
- Running
- Stopped
- Destroyed
- Expunged
With the intermediate states of:
- Creating
- Starting
- Stopping
- Expunging
Creating Instances
------------------
Instances are usually created from a Template. Users can also
create blank Instances. A blank Instance is a virtual
machine without an OS Template. Users can attach an ISO file and install
the OS from the CD/DVD-ROM.
.. note::
You can create an Instance without starting it. You can determine whether the
Instance needs to be started as part of the Instance deployment. A request parameter,
startvm, in the deployVirtualMachine API provides this feature. For more information,
see the Developer's Guide.
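
For example, a CloudMonkey (cmk) sketch of deploying an Instance without starting it might look like the following; all IDs are placeholders:

.. code:: bash

   # Deploy the Instance but leave it in the Stopped state (startvm=false)
   cmk deploy virtualmachine zoneid=<zone id> templateid=<template id> \
       serviceofferingid=<service offering id> startvm=false
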
To create an Instance from a Template:
#. Log in to the CloudStack UI as an administrator or User.
#. In the left navigation bar, click Compute -> Instances.
#. Click the Add Instance button.
#. Select a zone. Admin Users will have the option to select a pod, cluster or host.
#. Select a Template or ISO. For more information about how the Templates came
to be in this list, see `*Working with Templates* <templates.html>`_.
#. Select a service offering. Be sure that the hardware you have allows starting the selected
service offering. If the selected Template has a tag associated with it,
then only the supported service offerings will be available for selection.
#. Select a disk offering.
#. Select/Add a Network.
.. note::
VMware only: If the selected Template contains OVF properties, different deployment options or configurations,
multiple NICs or end-user license agreements, then the wizard will display these properties.
See `“Support for Virtual Appliances” <virtual_machines.html#support-for-virtual-appliances>`_.
#. Click Launch Instance and your Instance will be created and started.
.. note::
For security reasons, the internal name of the Instance is visible
only to the root admin.
.. note::
**XenServer**
Windows Instances running on XenServer require PV drivers,
which may be provided in the Template or added after the Instance is
created. The PV drivers are necessary for essential management
functions such as mounting additional volumes and ISO images,
live migration, and graceful shutdown.
**VMware**
If the rootDiskController and dataDiskController are not specified for an Instance using Instance details, and
these are set to use osdefault in the Template or the global configuration, then CloudStack tries to find the
recommended disk controllers for the Instance's guest OS from the hypervisor. In some specific cases, this may
cause issues with the Instance deployment or start operation. To overcome this, a specific disk controller can be
specified at the Instance or Template level. For an existing Instance, an admin can update its settings while it
is in the Stopped state.
Install Required Tools and Drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Be sure the following are installed on each Instance:
- For XenServer, install PV drivers and Xen tools on each Instance. This will
enable live migration and clean guest shutdown. Xen tools are
required in order for dynamic CPU and RAM scaling to work.
- For vSphere, install VMware Tools on each Instance. This will enable
console view to work properly. VMware Tools are required in order for
dynamic CPU and RAM scaling to work.
To be sure that Xen tools or VMware Tools are installed, use one of the
following techniques:
- Create each Instance from a Template that already has the tools installed;
or,
- When registering a new Template, the Administrator or User can
indicate whether tools are installed on the Template. This can be
done through the UI or using the updateTemplate API; or,
- If a User deploys an Instance with a Template that does not
have Xen tools or VMware Tools, and later installs the tools on the
Instance, then the User can inform CloudStack using the
updateVirtualMachine API. After installing the tools and updating the
Instance, stop and start the Instance.
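
As a sketch, assuming the ``isdynamicallyscalable`` flag of the updateVirtualMachine API is used to record that the tools are present (the ID below is a placeholder):

.. code:: bash

   # Tell CloudStack that Xen tools / VMware Tools are now installed
   cmk update virtualmachine id=<instance id> isdynamicallyscalable=true

Stop and start the Instance afterwards, as described above.
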
Instance Metadata
~~~~~~~~~~~~~~~~~
CloudStack provides different means for controlling an instance's metadata.
- 'extraconfig' parameter of 'deployVirtualMachine' or 'updateVirtualMachine' API methods
can be used for setting different metadata parameters for an instance.
- Zone-level configurations - 'vm.metadata.manufacturer' and 'vm.metadata.product' can be used
to set the manufacturer and product respectively in the instance metadata. However, a
custom value for these parameters may affect cloud-init functionality for the instance
when used with the CloudStack datasource. One of the requirements for cloud-init to work
with the CloudStack datasource is that the product value contains 'CloudStack'.
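
For example, a hedged CloudMonkey sketch of passing extra configuration metadata to an existing instance (the ID and configuration value are placeholders; the value must be URL-encoded):

.. code:: bash

   # Attach additional, URL-encoded configuration metadata to the instance
   cmk update virtualmachine id=<instance id> extraconfig=<url-encoded extra configuration>
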
Accessing Instances
-------------------
Any User can access their own Instances. The administrator can
access all Instances running in the cloud.
To access an Instance through the CloudStack UI:
#. Log in to the CloudStack UI as a User or admin.
#. Click Compute -> Instances, then click the name of a running Instance.
#. Click the View Console button |console-icon.png|.
To access an Instance directly over the Network:
#. The Instance must have some port open to incoming traffic. For example, in
a basic zone, a new Instance might be assigned to a security group which
allows incoming traffic. This depends on what security group you
picked when creating the Instance. In other cases, you can open a port by
setting up a port forwarding policy. See `“IP
Forwarding and Firewalling” <advanced_zone_config.html#ip-forwarding-and-firewalling>`_.
#. If a port is open but you cannot access the Instance using ssh, it's
possible that ssh is not enabled on the Instance. This will depend
on whether ssh is enabled in the Template you picked when creating
the Instance. Access the Instance through the CloudStack UI and enable ssh on the
machine using the commands for the Instance’s operating system.
#. If the Network has an external firewall device, you will need to
create a firewall rule to allow access. See `“IP
Forwarding and Firewalling” <advanced_zone_config.html#ip-forwarding-and-firewalling>`_.
Securing Instance Console Access (KVM only)
-------------------------------------------
CloudStack provides a way to secure VNC console access on KVM using the CA Framework certificates to enable TLS on VNC on each KVM host.
To enable TLS on a KVM host, navigate to the host and click on: Provision Host Security Keys (or invoke the provisionCertificate API for the host):
- When a new host is added and provisioned with a certificate, TLS will also be enabled for VNC.
- Running Instances on a secured host will continue to use unencrypted VNC until they are stopped and started.
- New Instances created on a secured host will use encrypted VNC.
Once the administrator concludes the certificate provisioning on CloudStack, console access for new Instances on the hosts will be encrypted. CloudStack displays the console of the Instances through the noVNC viewer embedded in the console proxy System VMs.
CloudStack Users will notice that encrypted VNC sessions display a green bar stating the session is encrypted, as in the image below. Also, the tab title includes ‘(TLS backend)’ when the session is encrypted.
.. note::
CloudStack will give access to the certificates to the group defined on the /etc/libvirt/qemu.conf file (or the last one defined on the file in case of multiple lines setting a group).
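
As a sketch, the host provisioning described above can also be triggered from CloudMonkey by invoking the provisionCertificate API for the host (the UUID is a placeholder):

.. code:: bash

   # Provision CA framework certificates on the host, enabling TLS for VNC
   cmk provision certificate hostid=<host uuid>
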
Stopping and Starting Instances
-------------------------------
Once an Instance is created, you can stop, restart, or delete it as
needed. In the CloudStack UI, click Instances, select the Instance, and use
the Stop, Start, Reboot, and Destroy buttons.
A stop will attempt to gracefully shut down the operating system via
an ACPI 'stop' command, which is similar to pressing the soft power switch
on a physical server. If the operating system cannot be stopped, it will
be forcefully terminated. This has the same effect as pulling the power
cord from a physical machine.
A reboot should not be considered as a stop followed by a start. In CloudStack,
a start command reconfigures the Instance to the stored parameters in
CloudStack's database. The reboot process does not do this.
When starting an Instance, admin Users have the option to specify a pod, cluster, or host.
.. note::
When starting an instance, it's possible to specify a host for deployment,
even if the host's tags don't match the instance's tags. This can lead to a
mismatch between the VM's tags and the host's tags, which may not be
desirable.
To avoid this, refer to the :ref:`strict-host-tags` section.
Deleting Instances
------------------
Users can delete their own Instances. A running Instance will be stopped abruptly
before it is deleted. Administrators can delete any Instance.
To delete an Instance:
#. Log in to the CloudStack UI as a User or admin.
#. In the left navigation, click Compute -> Instances.
#. Choose the Instance that you want to delete.
#. Click the Destroy Instance button. |Destroyinstance.png|
#. Optionally both expunging and the deletion of any attached volumes can be enabled.
When an Instance is **destroyed**, it can no longer be seen by the end User;
however, it can be seen (and recovered) by a root admin. In this state it still
consumes logical resources. Global settings control the maximum time from an Instance
being destroyed to the physical disks being removed. When the Instance and its root disk
have been deleted, the Instance is said to have been expunged.
Once an Instance is **expunged**, it cannot be recovered. All the
resources used by the Instance will be reclaimed by the system.
This includes the Instance's IP address.
Managing Instances
==================
Scheduling operations on an Instance
-------------------------------------
After an Instance is created, you can schedule Instance lifecycle operations using cron expressions. The operations that can be scheduled are:
- Start
- Stop
- Reboot
- Force Stop
- Force Reboot
To schedule an operation on an Instance through the UI:
#. Log in to the CloudStack UI as a User or admin.
#. In the left navigation, click Instances.
#. Click the Instance that you want to schedule the operation on.
#. On the Instance details page, click the **Schedule** button. |vm-schedule-tab.png|
#. Click on **Add schedule** button to add a new schedule or click on Edit button |EditButton.png| to edit
an existing schedule. |vm-schedule-form.png|
#. Configure the schedule as per requirements:
- **Description**: Enter a description for the schedule. If left empty, it is generated based on the action and the schedule.
- **Action**: Select the action to be triggered by the schedule. Can't be changed once the schedule has been created.
- **Schedule**: Select the frequency, in cron format, at which the action should be triggered.
For example, ``* * * * *`` will trigger the job every minute.
- **Timezone**: Select the timezone in which the schedule should be triggered.
- **Start Date**: Date at the specified time zone after which the schedule becomes active.
Defaults to current timestamp plus 1 minute.
- **End Date**: Date at the specified time zone until which the schedule remains active.
If not set, the schedule never becomes inactive.
.. note::
It's not possible to remove the end date once it's configured.
#. Click OK to save the schedule.
.. note::
If multiple schedules are configured for an Instance and their scheduled times coincide, only the schedule that was created first
will be executed; the rest will be skipped.
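
The same schedules can be managed through the API. A hedged CloudMonkey sketch, assuming the createVMSchedule API that backs this feature (IDs and the cron expression are placeholders):

.. code:: bash

   # Stop the Instance every day at 22:00 UTC; action values follow the list above
   cmk create vmschedule virtualmachineid=<instance id> action=stop \
       schedule="0 22 * * *" timezone=UTC description="nightly stop"
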
Changing the Instance Name, OS, or Group
----------------------------------------
After an Instance is created, you can modify the display name, operating
system, and the group it belongs to.
To change these attributes through the CloudStack UI:
#. Log in to the CloudStack UI as a User or admin.
#. In the left navigation, click Instances.
#. Select the Instance that you want to modify.
#. Click the Stop button to stop the Instance. |StopButton.png|
#. Click Edit. |EditButton.png|
#. Make the desired changes to the following:
#. **Display name**: Enter a new display name if you want to change the
name of the Instance.
#. **OS Type**: Select the desired operating system.
#. **Group**: Enter the group name for the Instance.
#. Click Apply.
Appending a Name to the Guest Instance’s Internal Name
------------------------------------------------------
Every guest Instance has an internal name. The host uses the internal name to identify the guest Instances. CloudStack gives you an option to provide a guest Instance with a name. You can set this name as the internal name so that vCenter can use it to identify the guest Instance. The global parameter vm.instancename.flag controls this functionality.
The default format of the internal name is i-<account\_id>-<vm\_id>-<i.n>, where i.n is the value of the global configuration - instance.name. However, if vm.instancename.flag is set to true, and if a name is provided during the creation of a guest Instance, the name is appended to the internal name of the guest Instance on the host. This makes the internal name format i-<account\_id>-<vm\_id>-<name>. The default value of vm.instancename.flag is false. This feature is intended to make the correlation between Instance names and internal names easier in large data center deployments.
The following table explains how an Instance name is displayed in different scenarios.
.. cssclass:: table-striped table-bordered table-hover
======================== =============================== ============================== ============================== ===========================
**User-Provided Name** Yes No Yes No
**vm.instancename.flag** True True False False
**Name** <Name> <i.n>-<UUID> <Name> <i.n>-<UUID>
**Display Name** <Display name> <i.n>-<UUID> <Display name> <i.n>-<UUID>
**Hostname on the VM** <Name> <i.n>-<UUID> <Name> <i.n>-<UUID>
**Name on vCenter** i-<account\_id>-<vm\_id>-<Name> <i.n>-<UUID> i-<account\_id>-<vm\_id>-<i.n> i-<account\_id>-<vm\_id>-<i.n>
**Internal Name** i-<account\_id>-<vm\_id>-<Name> i-<account\_id>-<vm\_id>-<i.n> i-<account\_id>-<vm\_id>-<i.n> i-<account\_id>-<vm\_id>-<i.n>
======================== =============================== ============================== ============================== ===========================
.. note::
<i.n> represents the value of the global configuration - instance.name
Instance delete protection
--------------------------
CloudStack protects instances from accidental deletion using a delete protection
flag, which is false by default. When delete protection is enabled for an
instance, it cannot be deleted through the UI or API. It can only be deleted
after removing delete protection from the instance.
Delete protection can be enabled for an instance via updateVirtualMachine API.
.. code:: bash
cmk update virtualmachine id=<instance id> deleteprotection=true
To remove delete protection, use the following command:
.. code:: bash
cmk update virtualmachine id=<instance id> deleteprotection=false
To enable/disable delete protection for an instance using the UI, follow these steps:
#. Log in to the CloudStack UI as a User or admin.
#. In the navigation menu on the left, click Instances under Compute.
#. Choose the Instance for which you want to enable/disable delete protection.
#. Click on the Edit button |EditButton.png|
#. Toggle the Delete Protection switch to enable or disable delete protection.
#. Click OK to save the changes.
.. note::
The instance delete protection is only considered when the instance is being
deleted through the UI or via `destroyVirtualMachine` or `expungeVirtualMachine`
API. If the domain/project is deleted, the instances under the domain/project
will be deleted irrespective of the delete protection status.
Changing the Service Offering for an Instance
---------------------------------------------
To upgrade or downgrade the level of compute resources available to an
Instance, you can change the Instance's compute offering.
#. Log in to the CloudStack UI as a User or admin.
#. In the left navigation, click Instances.
#. Choose the Instance that you want to work with.
#. (Skip this step if you have enabled dynamic Instance scaling; see
:ref:`cpu-and-memory-scaling`.)
Click the Stop button to stop the Instance. |StopButton.png|
#. Click the Change Service button. |ChangeServiceButton.png|
The Change service dialog box is displayed.
#. Select the offering you want to apply to the selected Instance.
#. Click OK.
.. note::
When changing the service offering for an instance, it's possible to have a
mismatch of host tags which can be problematic.
For more information on how to prevent this, see :ref:`strict-host-tags`.
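
The equivalent API call is changeServiceForVirtualMachine. A minimal CloudMonkey sketch of the steps above (IDs are placeholders):

.. code:: bash

   # Apply a new compute offering to the (stopped) Instance
   cmk change serviceforvirtualmachine id=<instance id> serviceofferingid=<new service offering id>
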
.. _cpu-and-memory-scaling:
CPU and Memory Scaling for Running Instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(Supported on VMware and XenServer)
It is not always possible to accurately predict the CPU and RAM
requirements when you first deploy an Instance. You might need to increase
these resources at any time during the life of an Instance. You can dynamically
modify CPU and RAM levels to scale up these resources for a running Instance
without incurring any downtime.
Dynamic CPU and RAM scaling can be used in the following cases:
- User Instances on hosts running VMware and XenServer.
- System VMs on VMware.
- VMware Tools or XenServer Tools must be installed on the virtual
machine.
- The new requested CPU and RAM values must be within the constraints
allowed by the hypervisor and the Instance operating system.
- New Instances that are created after the installation of CloudStack 4.2 can
use the dynamic scaling feature. If you are upgrading from a previous
version of CloudStack, your existing Instances created with previous
versions will not have the dynamic scaling capability unless you
update them using the following procedure.
Enable Dynamic Scaling for Existing Instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are upgrading from a previous version of CloudStack, and you want
your existing Instances created with previous versions to have the dynamic
scaling capability, update the Instances using the following steps:
#. Make sure the zone-level setting enable.dynamic.scale.vm is set to
true. In the left navigation bar of the CloudStack UI, click
Infrastructure, then click Zones, click the zone you want, and click
the Settings tab.
#. Install Xen tools (for XenServer hosts) or VMware Tools (for VMware
hosts) on each Instance if they are not already installed.
#. Stop the Instance.
#. Click the Edit button.
#. Click the Dynamically Scalable checkbox.
#. Click Apply.
#. Restart the Instance.
Configuring Dynamic CPU and RAM Scaling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To configure this feature, use the following new global configuration
variables:
- enable.dynamic.scale.vm: Set to True to enable the feature. By
default, the feature is turned off.
- scale.retry: How many times to attempt the scaling operation. Default
= 2.
Along with these global configurations, the following options need to be enabled
to make an Instance dynamically scalable:
- Template from which Instance is created needs to have Xen tools (for XenServer hosts)
or VMware Tools (for VMware hosts) and it should have 'Dynamically Scalable'
flag set to true.
- Service Offering of the Instance should have 'Dynamic Scaling Enabled' flag set to true.
By default, this flag is true when a Service Offering is created.
- While deploying an Instance, the User or Admin needs to set 'Dynamic Scaling Enabled' to true.
By default, this flag is set to true.
If any of the above settings are false then the Instance cannot be configured as dynamically scalable.
How to Dynamically Scale CPU and RAM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To modify the CPU and/or RAM capacity of an Instance, you need to
change the compute offering of the Instance to a new compute offering that has
the desired CPU value and RAM value and 'Dynamic Scaling Enabled' flag as true.
You can use the same steps described above in `“Changing the Service Offering for an
Instance” <#changing-the-service-offering-for-an-instance>`_, but skip the step where you
stop the Instance. Of course, you might have to create a new
compute offering first.
When you submit a dynamic scaling request, the resources will be scaled
up on the current host if possible. If the host does not have enough
resources, the Instance will be live migrated to another host in the same
cluster. If there is no host in the cluster that can fulfill the
requested level of CPU and RAM, the scaling operation will fail. The Instance
will continue to run as it was before.
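
A hedged CloudMonkey sketch of the same operation via the scaleVirtualMachine API (IDs are placeholders; the target offering must have 'Dynamic Scaling Enabled' set to true):

.. code:: bash

   # Scale a running Instance to a larger compute offering without downtime
   cmk scale virtualmachine id=<instance id> serviceofferingid=<larger service offering id>
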
Limitations
~~~~~~~~~~~
- You can not do dynamic scaling for system Instances on XenServer.
- CloudStack will not check to be sure that the new CPU and RAM levels
are compatible with the OS running on the Instance.
- When scaling memory or CPU for a Linux Instance on VMware, you might need
to run scripts in addition to the other steps mentioned above. For
more information, see `Hot adding memory in Linux
(1012764) <http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1012764>`_
in the VMware Knowledge Base.
- (VMware) If resources are not available on the current host, scaling
up will fail on VMware because of a known issue where CloudStack and
vCenter calculate the available capacity differently. For more
information, see
`https://issues.apache.org/jira/browse/CLOUDSTACK-1809 <https://issues.apache.org/jira/browse/CLOUDSTACK-1809>`_.
- On Instances running Linux 64-bit and Windows 7 32-bit operating systems,
if the Instance is initially assigned a RAM of less than 3 GB, it can be
dynamically scaled up to 3 GB, but not more. This is due to a known
issue with these operating systems, which will freeze if an attempt
is made to dynamically scale from less than 3 GB to more than 3 GB.
- On KVM, not all versions of Qemu/KVM may support dynamic scaling. Some combinations may result in CPU or memory related failures during Instance deployment.
Resetting the Instance Root Volume on Reboot
--------------------------------------------
For secure environments, and to ensure that Instance state is not persisted
across reboots, you can reset the root disk. For more information, see
`“Reset Instance to New Root Disk on
Reboot” <storage.html#reset-vm-to-new-root-disk-on-reboot>`_.
Moving Instances Between Hosts (Manual Live Migration)
------------------------------------------------------
The CloudStack administrator can move a running Instance from one host to
another without interrupting service to Users or going into maintenance
mode. This is called manual live migration, and can be done under the
following conditions:
- The root administrator is logged in. Domain admins and Users can not
perform manual live migration of Instances.
- The Instance is running. Stopped Instances can not be live migrated.
- The destination host must have enough available capacity. If not, the
Instance will remain in the "migrating" state until memory becomes
available.
- (KVM) The Instance must not be using local disk storage. (On XenServer and
VMware, Instance live migration with local disk is enabled by CloudStack
support for XenMotion and vMotion.)
- (KVM) The destination host must be in the same cluster as the
original host. (On XenServer and VMware, Instance live migration from one
cluster to another is enabled by CloudStack support for XenMotion and
vMotion.)
To manually live migrate an Instance:
#. Log in to the CloudStack UI as root administrator.
#. In the left navigation, click Instances.
#. Choose the Instance that you want to migrate.
#. Click the Migrate Instance button. |Migrateinstance.png|
#. From the list of suitable hosts, choose the one to which you want to
move the Instance.
.. note::
If the Instance's storage has to be migrated along with the Instance, this will
be noted in the host list. CloudStack will take care of the storage
migration for you.
#. Click OK.
.. note::
(KVM) If the Instance's storage has to be migrated along with the Instance, from a mounted NFS storage pool to a cluster-wide mounted NFS storage pool, then the 'migrateVirtualMachineWithVolume' API has to be used. There is no UI integration for this feature.
(CloudMonkey) > migrate virtualmachinewithvolume virtualmachineid=<virtual machine uuid> hostid=<destination host uuid> migrateto[i].volume=<virtual machine volume number i uuid> migrateto[i].pool=<destination storage pool uuid for volume number i>
where i in [0,..,N] and N = number of volumes of the Instance
.. note::
During live migration, there can be a mismatch between the instance's tags
with the destination host's tags which might be undesirable.
For more details on how to prevent this, see :ref:`strict-host-tags`.
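
The UI steps above correspond to the migrateVirtualMachine API. A minimal CloudMonkey sketch (UUIDs are placeholders):

.. code:: bash

   # Live migrate a running Instance to a specific destination host
   cmk migrate virtualmachine virtualmachineid=<instance uuid> hostid=<destination host uuid>
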
Moving Instance's Volumes Between Storage Pools (offline volume Migration)
--------------------------------------------------------------------------
The CloudStack administrator can move a stopped Instance's volumes from one
storage pool to another within the cluster. This is called offline volume
migration, and can be done under the following conditions:
- The root administrator is logged in. Domain admins and Users can not
perform offline volume migration of Instances.
- The Instance is stopped.
- The destination storage pool must have enough available capacity.
- The UI operation allows migrating only the root volume by selecting the
storage pool. To migrate all volumes to the desired storage pools,
the 'migrateVirtualMachineWithVolume' API has to be used by providing
the 'migrateto' map parameter.
To migrate a stopped Instance's volumes:
#. Log in to the CloudStack UI as root administrator.
#. In the left navigation, click Instances.
#. Choose the Instance that you want to migrate.
#. Click the Migrate Instance button. |Migrateinstance.png|
#. From the list of suitable storage pools, choose the one to which you want to
move the Instance root volume.
#. Click OK.
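
To migrate all volumes of the stopped Instance to specific storage pools, a hedged CloudMonkey sketch using the 'migrateVirtualMachineWithVolume' API and its 'migrateto' map, as described above (all UUIDs are placeholders):

.. code:: bash

   # Move the root volume (index 0) and a data volume (index 1) to new pools
   cmk migrate virtualmachinewithvolume virtualmachineid=<instance uuid> \
       migrateto[0].volume=<root volume uuid> migrateto[0].pool=<destination pool uuid> \
       migrateto[1].volume=<data volume uuid> migrateto[1].pool=<destination pool uuid>
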
Assigning Instances to Hosts
----------------------------
At any point in time, each Instance is running on a single
host. How does CloudStack determine which host to place an Instance on?
There are several ways:
- Automatic default host allocation. CloudStack can automatically pick
the most appropriate host to run each Instance.
- Instance type preferences. CloudStack administrators can specify that
certain hosts should have a preference for particular types of guest
Instances. For example, an administrator could state that a host
should have a preference to run Windows guests. The default host
allocator will attempt to place guests of that OS type on such hosts
first. If no such host is available, the allocator will place the
Instance wherever there is sufficient physical capacity.
- Vertical and horizontal allocation. Vertical allocation consumes all
the resources of a given host before allocating any guests on a
second host. This reduces power consumption in the cloud. Horizontal
allocation places a guest on each host in a round-robin fashion. This
may yield better performance to the guests in some cases.
- Admin Users preferences. Administrators have the option to specify a
pod, cluster, or host to run the Instance in. CloudStack will then select
a host within the given infrastructure.
- End User preferences. Users can not control exactly which host will
run a given Instance, but they can specify a zone for the Instance.
CloudStack is then restricted to allocating the Instance only to one of the
hosts in that zone.
- Host tags. The administrator can assign tags to hosts. These tags can
be used to specify which host an Instance should use. The CloudStack
administrator decides whether to define host tags, then create a
service offering using those tags and offer it to the User.
- Affinity groups. By defining affinity groups and assigning Instances to
them, the User or administrator can influence (but not dictate) whether
Instances should run on separate hosts or on the same host. This feature is to
let Users specify whether certain Instances will or will not be on the same host.
- CloudStack also provides a pluggable interface for adding new
allocators. These custom allocators can provide any policy the
administrator desires.
Affinity Groups
~~~~~~~~~~~~~~~
By defining affinity groups and assigning Instances to them, the User or
administrator can influence (but not dictate) which Instances should run on
either the same or separate hosts. This feature allows Users to specify
the affinity groups to which an Instance can belong. Instances with the
same “host anti-affinity” type won’t be on the same host, which serves to
increase fault tolerance. If a host fails, another Instance offering the same
service (for example, hosting the User's website) is still up and
running on another host.
It also allows Users to specify that Instances with the same "host affinity" type
must run on the same host, which can be useful in ensuring connectivity and low
latency between guest Instances.
"non-strict host anti-affinity" is similar to, but more flexible than, "host
anti-affinity". In that case Instances are deployed to different hosts as long as
there are enough hosts to satisfy the requirement, otherwise they might be
deployed to the same host.
"non-strict host affinity" is similar to, but more flexible than, "host affinity",
Instances are ideally placed together in the same host, but only if possible.
.. note:: When using VMware and enabling DRS, the results are
unpredictable. VMware implements similar functionality but
CloudStack does not leverage the VMware feature. As VMware is
unaware of the CloudStack definition of affinity groups, its DRS
may go against the desired configuration.
The scope of an affinity group is at the Account level.
Creating a New Affinity Group
'''''''''''''''''''''''''''''
To add an affinity group:
#. Log in to the CloudStack UI as an administrator or User.
#. In the left navigation bar, click Affinity Groups.
#. Click Add affinity group. In the dialog box, fill in the following
fields:
- Name. Give the group a name.
- Description. Any desired text to tell more about the purpose of
the group.
- Type. CloudStack supports four types of affinity groups. "host
anti-affinity", "host affinity", "non-strict host affinity" and
"non-strict host anti-affinity". "host anti-affinity" indicates
that the Instances in this group must not be placed on the same
host with each other. "host affinity" on the other hand indicates
that Instances in this group must be placed on the same host.
"non-strict host anti-affinity" indicates that Instances in this group
should be deployed to different hosts.
"non-strict host affinity" indicates that Instances in this group
should not be deployed to same hosts.
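
The same can be done with the createAffinityGroup API. A minimal CloudMonkey sketch (the name and description below are only examples):

.. code:: bash

   # Create a host anti-affinity group for a web tier
   cmk create affinitygroup name=web-anti-affinity type="host anti-affinity" \
       description="Keep web tier Instances on separate hosts"
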
Assign a New Instance to an Affinity Group
''''''''''''''''''''''''''''''''''''''''''
To assign a new Instance to an affinity group:
- Create the Instance as usual, as described in `“Creating
Instances” <virtual_machines.html#creating-instances>`_. In the Add Instance
wizard, there is a new Affinity tab where you can select the
affinity group.
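
When deploying through the API, the affinity group can be passed with the affinitygroupids (or affinitygroupnames) parameter of deployVirtualMachine. A hedged sketch with placeholder IDs:

.. code:: bash

   # Deploy an Instance directly into an existing affinity group
   cmk deploy virtualmachine zoneid=<zone id> templateid=<template id> \
       serviceofferingid=<service offering id> affinitygroupids=<affinity group id>
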
Change Affinity Group for an Existing Instance
''''''''''''''''''''''''''''''''''''''''''''''
To assign an existing Instance to an affinity group:
#. Log in to the CloudStack UI as an administrator or User.
#. In the left navigation bar, click Instances.
#. Click the name of the Instance you want to work with.
#. Stop the Instance by clicking the Stop button.
#. Click the Change Affinity button. |change-affinity-button.png|
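
A hedged CloudMonkey sketch of the equivalent API call, assuming the updateVMAffinityGroup API (IDs are placeholders; the Instance must be stopped):

.. code:: bash

   # Replace the affinity group list of a stopped Instance
   cmk update vmaffinitygroup id=<instance id> affinitygroupids=<affinity group id>
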
View Members of an Affinity Group
'''''''''''''''''''''''''''''''''
To see which Instances are currently assigned to a particular affinity group:
#. In the left navigation bar, click Affinity Groups.
#. Click the name of the group you are interested in.
#. Click View Instances. The members of the group are listed.
From here, you can click the name of any Instance in the list to access all
its details and controls.
Delete an Affinity Group
''''''''''''''''''''''''
To delete an affinity group:
#. In the left navigation bar, click Affinity Groups.
#. Click the name of the group you are interested in.
#. Click Delete.
Any Instance that is a member of the affinity group will be disassociated
from the group. The former group members will continue to run
normally on the current hosts, but if the Instance is restarted, it will no
longer follow the host allocation rules from its former affinity
group.
Determine Destination Host of Instances with Non-Strict Affinity Groups
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
(Non-Strict Host Anti-Affinity and Non-Strict Host Affinity only)
The destination host of an Instance with Non-Strict Affinity Groups is determined
by host priorities. Hosts have a default priority of 0. For each
Instance in the same Non-Strict Host Anti-Affinity group on a host, the host
priority is decreased by 1. For each Instance in the same Non-Strict Host
Affinity group on a host, the host priority is increased by 1. All
available hosts are reordered by host priority when deploying or starting an Instance.
Here are some examples of how host priorities are calculated.
- Example 1: Instance has a non-strict host anti-affinity group.
If Host-1 has 2 Instances in the group, Host-2 has 3 Instances in the group.
Host-1 priority is -2, Host-2 priority is -3. If there are only 2 hosts,
Instance will be deployed to Host-1 as it has higher priority (-2 > -3).
- Example 2: Instance has a non-strict host affinity group.
If Host-1 has 2 Instances in the group, Host-2 has 3 Instances in the group.
Host-1 priority is 2, Host-2 priority is 3. If there are only 2 hosts,
Instance will be deployed to Host-2 (3 > 2).
- Example 3: Instance has a non-strict host affinity group and also a non-strict host anti-affinity group.
If Host-1 has 2 Instances in the non-strict host affinity group and
3 Instances in the non-strict host anti-affinity group, Host-1's priority is
calculated as:
0 (default) + 2 (Instances in non-strict host affinity group) - 3 (Instances in the non-strict host anti-affinity group) = -1
Changing an Instance's Base Image
---------------------------------
Every Instance is created from a base image, which is a Template or ISO which
has been created and stored in CloudStack. Both cloud administrators and
end Users can create and modify Templates, ISOs, and Instances.
In CloudStack, you can change an existing Instance's base image from one
Template to another, or from one ISO to another. (You can not change
from an ISO to a Template, or from a Template to an ISO).
For example, suppose there is a Template based on a particular operating
system, and the OS vendor releases a software patch. The administrator
or User naturally wants to apply the patch and then make sure existing
Instances start using it. Whether a software update is involved or not, it's
also possible to simply switch an Instance from its current Template to any
other desired Template.
To change an Instance's base image, call the restoreVirtualMachine API command
and pass in the Instance ID and a new Template ID. The Template
ID parameter may refer to either a Template or an ISO, depending on
which type of base image the Instance was already using (it must match the
previous type of image). When this call occurs, the Instance's root disk is
first destroyed, then a new root disk is created from the source
designated in the Template ID parameter. The new root disk is attached
to the Instance, and now the Instance is based on the new Template.
You can also omit the Template ID parameter from the
restoreVirtualMachine call. In this case, the Instance's root disk is
destroyed and recreated, but from the same Template or ISO that was
already in use by the Instance.
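
A minimal CloudMonkey sketch of both variants (IDs are placeholders):

.. code:: bash

   # Reinstall the Instance from a different Template (or ISO, matching the previous type)
   cmk restore virtualmachine virtualmachineid=<instance id> templateid=<new template or ISO id>

   # Recreate the root disk from the Template/ISO already in use
   cmk restore virtualmachine virtualmachineid=<instance id>
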
Instance Lease
--------------
CloudStack offers the option to create Instances with a Lease. A Lease defines a set time period after which a selected action,
such as stopping or destroying the instance, will be automatically performed. This helps optimize cloud resource usage by automatically
freeing up resources that are no longer in use.
If a user needs an instance only for a limited time, this option can be very helpful.
When deploying an instance, users can either choose a Compute Offering that includes Instance Lease support or enable it specifically for that instance,
setting the number of days after which the instance should be stopped or destroyed once their task is complete.
**Configuring Instance Lease feature**
The cloud administrator can use global configuration variables to control the behavior of Instance Lease.
To set these variables, API or CloudStack UI can be used:
======================================= ========================
Configuration Description
======================================= ========================
instance.lease.enabled Indicates whether to enable the Instance Lease feature, will be applicable only on instances created after lease is enabled. **Default: false**
instance.lease.scheduler.interval Background task interval in seconds that executes Lease expiry action on eligible expired instances. Default: 3600.
instance.lease.eventscheduler.interval Background task interval in seconds that executes Lease event executor for instances about to be expired in next N days. Default: 86400
instance.lease.expiryevent.daysbefore Denotes number of days (N) in advance expiry events are generated for instance about to expire. Default: 7 days
======================================= ========================
.. note:: It is recommended to configure the lowest feasible value (in seconds) for **instance.lease.scheduler.interval**, so that the lease expiry action is taken soon after the lease expires.
**Lease Parameters**
**leaseduration**: Lease duration is specified in days. It accepts natural numbers (>= 1), or -1 to disable the lease. The maximum supported value is 36500 (100 years).
Users can disable the Lease for an instance in two ways:
- Disable the Instance Lease during instance deployment by unchecking the 'Enable Lease' option when using a Compute Offering that supports it.
- For existing instances with a lease already enabled, it can be removed by editing the instance and unchecking the 'Enable Lease' option.
**leaseexpiryaction**: Two expiry actions are supported:
- STOP: The instance is stopped, and it will be out of lease. The user can restart the instance manually.
- DESTROY: The instance is destroyed when the lease expires.
.. note:: The expiry action is executed at most once on the instance. For example, the STOP action will bring the instance to the Stopped state on expiry, and the instance will be out of lease; the User may choose to start it again.
**Using Instance Lease**
Lease information is associated with an Instance, and the following parameters are used to enable a lease for it:
#. leaseduration
#. leaseexpiryaction
The Instance remains active for the specified leaseduration (in days). Upon lease expiry, the configured expiry action is executed on the instance and
the lease is removed from the instance for any further action.
**Notes:**
#. Lease Assignment: A lease can only be assigned to an instance during deployment.
#. Lease Acquisition: Instances without a lease cannot acquire one by switching to a different Compute Offering or by editing the instance.
#. Lease Inheritance: Instances inherit the lease from a Compute Offering with 'Instance Lease' feature enabled. This lease can be overridden or disabled in the “Advanced Settings”.
#. Lease Persistence: A lease is always tied to the instance. Modifications to the Compute Offering do not affect the instance's lease.
#. Non-Lease Compute Offering: Instances can have a lease by enabling it in the "Advanced Settings" for non-lease based Compute Offering too.
#. Lease Duration Management: The lease duration can be extended or reduced for instances before expiry. However, once the lease is disabled, it cannot be re-enabled for that instance.
#. Lease Expiry: Once the lease expires and the associated action is completed, the lease is annulled and cannot be reattached or extended.
#. Feature Disablement: If the lease feature is disabled, the lease associated with instances is canceled. Re-enabling the feature will not automatically reapply the lease to previously grandfathered instances.
#. Delete Protection: The DESTROY lease expiry action is skipped for instances with delete protection enabled.
**Deployment of Instance with lease**
There are two ways to deploy an instance with a lease from the UI:
1. Use Compute Offering which has 'Instance Lease' feature enabled.
.. image:: /_static/images/deploy_instance_lease_offering.png
:width: 400px
:align: center
:alt: Deploy Instance with lease compute offering dialog box
2. Enable the lease under Advanced Settings during instance deployment
.. image:: /_static/images/deploy_instance_advanced_lease.png
:width: 400px
:align: center
:alt: Deploy Instance with lease using advance settings
**Using API**
Pass lease parameters in the command to enable lease during instance deployment:
.. code:: bash
cmk deploy virtualmachine name=..... leaseduration=... leaseexpiryaction=...
- Use Compute Offering with lease
.. code:: bash
cmk deploy virtualmachine name=..... serviceofferingid=lease-compute-offering
**Editing Instance Lease**
The lease duration for an instance can be extended, reduced, or disabled for instances that already have an active lease.
However, it is not possible to enable the lease on an instance after it has already been deployed.
From UI:
.. image:: /_static/images/edit_instance_lease.png
:width: 400px
:align: center
:alt: Edit Instance Lease dialog
Using API:
.. code:: bash
cmk update virtualmachine id=fa970d19-8340-455c-a9fb-569205954fdc leaseduration=20 leaseexpiryaction=DESTROY
To disable lease using API:
.. code:: bash
cmk update virtualmachine id=fa970d19-8340-455c-a9fb-569205954fdc leaseduration=-1
.. note:: The DESTROY lease expiry action will skip an instance if delete protection is enabled for it.
.. note:: When the feature is disabled, the lease associated with instances is cancelled. Re-enabling the feature will not automatically reapply the lease to previously grandfathered instances.
.. note:: The lease duration is considered the total lease for the instance.
**Instance Lease Events**
The Lease feature generates various events to help with auditing and monitoring:
=================== ========================
Event Type Description
=================== ========================
VM.LEASE.EXPIRED Event is generated at lease expiry
VM.LEASE.DISABLED Denotes if lease is disabled by user/admin
VM.LEASE.CANCELLED When lease is cancelled (feature gets disabled)
VM.LEASE.EXPIRING Expiry intimation event for instance
=================== ========================
Advanced Instance Settings
--------------------------
Each User Instance has a set of "details" associated with it (as visible via listVirtualMachine API call) - those "details" are shown on the "Settings" tab of the Instance in the GUI (words "setting(s)" and "detail(s)" are here used interchangeably).
The Settings tab is always present/visible, but settings can be changed only when the Instance is in a Stopped state.
Some Instance details/settings can be hidden for users via "user.vm.denied.details" global setting. Instance details/settings can also be made read-only for users using "user.vm.readonly.details" global setting. List of default hidden and read-only details/settings is given below.
.. note::
Since version 4.15, VMware Instance settings for the ROOT disk controller, NIC adapter type and data disk controller are populated automatically with the values inherited from the Template.
When adding a new setting or modifying the existing ones, setting names are shown/offered in a drop-down list, as well as their possible values (with the exception of boolean or numerical values).
Details/settings that are hidden for users by default:
- rootdisksize
- cpuOvercommitRatio
- memoryOvercommitRatio
- Message.ReservedCapacityFreed.Flag
Details/settings that are read-only for users by default:
- dataDiskController
- rootDiskController
An example list of settings as well as their possible values are shown on the images below:
|vm-settings-dropdown-list.png|
(VMware hypervisor)
|vm-settings-values-dropdown-list.png|
(VMware disk controllers)
|vm-settings-values1-dropdown-list.png|
(VMware NIC models)
|vm-settings-values-dropdown-KVM-list.png|
(KVM disk controllers)
|vm-settings-kvm-guest-cpu-model.png|
(KVM guest CPU model, available for root admin since 4.20.1.0)
CloudStack supports setting the guest machine type for KVM instances since 4.22.0 by using the instance setting 'kvm.guest.os.machine.type'. The list of supported machine types will depend on the QEMU version on the KVM host.
.. note::
For Ubuntu 24 KVM hosts (and other distros containing QEMU 8.x versions) setting the machine type for Windows VMs to 'pc-i440fx-8.0' mitigates the issue which prevents retrieving the instance UUID from within the guest VM via: `wmic path win32_computersystemproduct get uuid`.
Instance Settings for Virtual Trusted Platform Module (vTPM)
-------------------------------------------------------------
Trusted Platform Module (TPM) is a standard for a secure cryptoprocessor, which
can securely store artifacts used to authenticate the platform, including passwords,
certificates, or encryption keys. TPM is required by recent Windows releases.
Virtual Trusted Platform Module (vTPM) is the software-based representation of physical TPM.
CloudStack supports vTPM for instances running on KVM and VMware since 4.20.1.0.
|vm-settings-uefi-secure.png|
UEFI setting
- On VMware, the boot type must be set to UEFI. Boot mode can be SECURE (recommended) or LEGACY.
- On KVM, it is recommended to set boot type to UEFI, and boot mode to SECURE.
- UEFI is required for some Windows versions.
|vm-settings-virtual-tpm-model-kvm.png|
TPM model for KVM. There are two options:
- tpm-tis, where TIS stands for TPM Interface Specification;
- tpm-crb, where CRB stands for Command-Response Buffer.
|vm-settings-virtual-tpm-version-kvm.png|
TPM version for KVM. There are two options:
- 2.0. This is the default TPM version. It is used when version is not specified or invalid.
- 1.2. This is not supported with CRB model.
|vm-settings-virtual-tpm-enabled-vmware.png|
Enable or disable vTPM for VMware.
Instance Snapshots
==================
(Supported on VMware, XenServer and KVM (NFS only))
In addition to the existing CloudStack ability to snapshot individual Instance
volumes, you can take an Instance Snapshot to preserve all the Instance's data
volumes as well as (optionally) its CPU/memory state. This is useful for
quick restore of an Instance. For example, you can snapshot an Instance, then make
changes such as software upgrades. If anything goes wrong, simply
restore the Instance to its previous state using the previously saved Instance
Snapshot.
The Snapshot is created using the hypervisor's native Snapshot facility.
The Instance Snapshot includes not only the data volumes, but optionally also
whether the Instance is running or turned off (CPU state) and the memory
contents. The Snapshot is stored in CloudStack's primary storage.
Instance Snapshots can have a parent/child relationship. Each successive
Snapshot of the same Instance is the child of the Snapshot that came before
it. Each time you take an additional Snapshot of the same Instance, it saves
only the differences between the current state of the Instance and the state
stored in the most recent previous Snapshot. The previous Snapshot
becomes a parent, and the new Snapshot is its child. It is possible to
create a long chain of these parent/child Snapshots, which amount to a
"redo" record leading from the current state of the Instance back to the
original.
After Instance Snapshots are created, they can be tagged with a key/value pair,
like many other resources in CloudStack.
KVM supports Instance Snapshots when using NFS shared storage. If raw block storage
is used (e.g. Ceph), then Instance Snapshots are not possible, since there is no way
to write the RAM contents anywhere. In such cases you can use, as an alternative,
:ref:`Storage-based-Instance-Snapshots-on-KVM`.
If you need more information about Instance Snapshots on VMware, check out the
VMware documentation and the VMware Knowledge Base, especially
`Understanding Instance Snapshots
<http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1015180>`_.
.. _Storage-based-Instance-Snapshots-on-KVM:
Storage-based Instance Snapshots on KVM
---------------------------------------
.. note::
For now, this functionality is limited to NFS and local storage.
CloudStack introduces a new Storage-based Instance Snapshots on KVM feature that provides
crash-consistent Snapshots of all disks attached to the Instance. It employs the underlying storage
providers’ capability to create/revert/delete disk Snapshots. Consistency is obtained by freezing
the Instance before the snapshotting. Memory Snapshots are not supported.
.. note::
``freeze`` and ``thaw`` of Instance is maintained by the guest agent.
``qemu-guest-agent`` has to be installed in the Instance.
When the snapshotting is complete, the Instance is thawed.
You can use this functionality on Instances with raw block storage (e.g. Ceph/SolidFire/Linstor).
.. _Disk-only-File-based-Storage-Instance-Snapshots-on-KVM:
Disk-only File-based Storage Instance Snapshot on KVM
-----------------------------------------------------
Since version 4.21, CloudStack supports incremental disk-only instance snapshots for VMs on KVM that are running on file-based storages (NFS, local, shared mount point).
Different from :ref:`Storage-based-Instance-Snapshots-on-KVM`, the VM is not frozen by default; it is frozen only if the ``quiescevm`` parameter is provided. Furthermore, if ``quiescevm`` is true,
the VM is frozen only while the deltas are created on its volumes, so the downtime is minimal.
When using this snapshot strategy, you will not be able to create volume snapshots, as these two features are not compatible. If you want to use both volume snapshots and instance snapshots
at the same time, you may add the value ``KvmFileBasedStorageVmSnapshotStrategy`` to the ``vmSnapshot.strategies.exclude`` configuration, so that
this strategy is not used and the :ref:`Storage-based-Instance-Snapshots-on-KVM` feature is used instead.
More information on this feature may be found in the `specification <https://github.com/apache/cloudstack/issues/9524>`_.
Limitations on Instance Snapshots
---------------------------------
- If an Instance has some stored Snapshots, you can't attach a new volume to the
Instance or delete any existing volumes. If you change the volumes on the
Instance, it would become impossible to restore the Instance Snapshot which was
created with the previous volume structure. If you want to attach a
volume to such an Instance, first delete its Snapshots.
- Instance Snapshots which include both data volumes and memory can't be kept
if you change the Instance's service offering. Any existing Instance Snapshots of
this type will be discarded.
- You can't make an Instance Snapshot at the same time as you are taking a
Volume Snapshot.
- You should use only CloudStack to create Instance Snapshots on hosts
managed by CloudStack. Any Snapshots that you make directly on the
hypervisor will not be tracked in CloudStack.
Pause During Live Instance Snapshots on KVM
-------------------------------------------
When creating **Instance Snapshots with Memory**, CloudStack uses Libvirt’s
*domain snapshot* API to create an Internal Snapshot that includes Memory.
The guest’s memory state is written directly into the root volume’s QCOW2 file.
This causes the instance to pause for the duration of the memory dump. The pause
time is typically much longer than with VMware snapshots, but this is a limitation
with Internal Snapshots in Libvirt.
**Instance Snapshots without Memory** have seen significant improvements since CloudStack 4.21 with the
:ref:`Disk-only-File-based-Storage-Instance-Snapshots-on-KVM` feature for NFS and local storage.
Before 4.21, the Instance would be frozen for the entire duration of the snapshot create operation.
Since 4.21, the Instance is only frozen during the checkpointing operation, which is significantly shorter.
Users looking for the Instance Snapshot feature in KVM are recommended to use the
:ref:`Disk-only-File-based-Storage-Instance-Snapshots-on-KVM` feature, if the pause duration is a concern.
App consistent snapshots can be created by using the ``quiescevm`` parameter with pre and post-freeze hooks.
The Instance should have Qemu Guest Agent installed for this to work.
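When scripting Snapshot creation, the quiesce behaviour described above is requested through the
``createVMSnapshot`` API. A minimal sketch using the local integration port; the Instance ID is a
placeholder and the parameter names should be checked against the API reference for your version:

.. parsed-literal::

   # disk-only Snapshot (no memory) with filesystem quiesce via the QEMU Guest Agent
   curl --globoff "http://localhost:8096/?command=createVMSnapshot&virtualmachineid=<instance id>&snapshotmemory=false&quiescevm=true&name=before-upgrade"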
Configuring Instance Snapshots
------------------------------
The cloud administrator can use global configuration variables to
control the behavior of Instance Snapshots. To set these variables, go through
the Global Settings area of the CloudStack UI.
.. cssclass:: table-striped table-bordered table-hover
================================= ========================
Configuration Description
================================= ========================
vmsnapshots.max The maximum number of Instance Snapshots that can be saved for any given Instance in the cloud. The total possible number of Instance Snapshots in the cloud is (number of Instances) \* vmsnapshots.max. If the number of Snapshots for any Instance ever hits the maximum, the older ones are removed by the Snapshot expunge job.
vmsnapshot.create.wait Number of seconds to wait for a Snapshot job to succeed before declaring failure and issuing an error.
kvm.vmstoragesnapshot.enabled     Enables live disk-only Snapshots (without memory) of Instances on the KVM hypervisor. Requires QEMU version 1.6+ (on NFS or local file system) and qemu-guest-agent installed on the guest Instance.
================================= ========================
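These settings can also be inspected and changed through the API rather than the UI. A minimal
sketch using the local integration port (adjust host, port and authentication for your deployment):

.. parsed-literal::

   # inspect the current value
   curl --globoff "http://localhost:8096/?command=listConfigurations&name=vmsnapshots.max"

   # raise the per-Instance Snapshot limit to 20
   curl --globoff "http://localhost:8096/?command=updateConfiguration&name=vmsnapshots.max&value=20"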
Using Instance Snapshots
------------------------
To create an Instance Snapshot using the CloudStack UI:
#. Log in to the CloudStack UI as a user or administrator.
#. Click Instances.
#. Click the name of the Instance you want to Snapshot.
#. Click the Take Instance Snapshot button. |VMSnapshotButton.png|
.. note::
If a Snapshot is already in progress, then clicking this button
will have no effect.
#. Provide a name and description. These will be displayed in the Instance
Snapshots list.
#. (For running Instances only) If you want to include the Instance's memory in the
Snapshot, click the Memory checkbox. This saves the CPU and memory
state of the Instance. If you don't check this box, then only
the current state of the Instance disk is saved. Checking this box makes
the Snapshot take longer.
#. Quiesce Instance: check this box if you want to quiesce the file system on
the Instance before taking the Snapshot. Not supported on XenServer when
used with CloudStack-provided primary storage.
When this option is used with CloudStack-provided primary storage,
the quiesce operation is performed by the underlying hypervisor
(VMware is supported). When used with another primary storage
vendor's plugin, the quiesce operation is provided according to the
vendor's implementation.
#. Click OK.
To delete a Snapshot or restore an Instance to the state saved in a particular
Snapshot:
#. Navigate to the Instance as described in the earlier steps.
#. Click View Instance Snapshots.
#. In the list of Snapshots, click the name of the Snapshot you want to
work with.
#. Depending on what you want to do:
To delete the Snapshot, click the Delete button. |delete-button.png|
To revert to the Snapshot, click the Revert button. |revert-vm.png|
.. note::
Instance Snapshots are deleted automatically when an Instance is destroyed. You don't
have to manually delete the Snapshots in this case.
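The delete and revert operations are also available through the ``deleteVMSnapshot`` and
``revertToVMSnapshot`` APIs. A minimal sketch in the same local curl style; the Snapshot ID is a
placeholder:

.. parsed-literal::

   # revert the Instance to the state saved in a Snapshot
   curl --globoff "http://localhost:8096/?command=revertToVMSnapshot&vmsnapshotid=<snapshot id>"

   # delete a Snapshot that is no longer needed
   curl --globoff "http://localhost:8096/?command=deleteVMSnapshot&vmsnapshotid=<snapshot id>"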
Support for Virtual Appliances
==============================
.. include:: virtual_machines/virtual_appliances.rst
Importing and Unmanaging Instances
==================================
In the UI, both unmanaged and managed Instances are listed in the *Tools > Import-Export Instances* section, by selecting:
.. cssclass:: table-striped table-bordered table-hover
==================== ========================
Source Destination Hypervisor
==================== ========================
Unmanaged Instance VMware
==================== ========================
|vm-unmanagedmanaged.png|
.. include:: ./virtual_machines/importing_unmanaging_vms.rst
Importing Virtual Machines From VMware into KVM
===============================================
.. include:: ./virtual_machines/importing_vmware_vms_into_kvm.rst
Instance Backups (Backup and Recovery Feature)
==============================================
.. include:: backup_and_recovery.rst
Using SSH Keys for Authentication
=================================
In addition to the username and password authentication, CloudStack
supports using SSH keys to log in to the cloud infrastructure for
additional security. You can use the createSSHKeyPair API to generate
the SSH keys.
Because each cloud user has their own SSH key, one cloud user cannot log
in to another cloud user's Instances unless they share their SSH key
files. Using a single SSH key pair, you can manage multiple Instances.
Creating an Instance Template that Supports SSH Keys
----------------------------------------------------
Create an Instance Template that supports SSH Keys.
#. Create a new Instance by using the Template provided by CloudStack.
For more information, see the earlier section on creating Instances.
#. Download the CloudStack script from `The SSH Key Gen Script
<http://sourceforge.net/projects/cloudstack/files/SSH%20Key%20Gen%20Script/>`_
to the Instance you have created.
.. parsed-literal::
wget http://downloads.sourceforge.net/project/cloudstack/SSH%20Key%20Gen%20Script/cloud-set-guest-sshkey.in?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fcloudstack%2Ffiles%2FSSH%2520Key%2520Gen%2520Script%2F&ts=1331225219&use_mirror=iweb
#. Copy the file to /etc/init.d.
.. parsed-literal::
cp cloud-set-guest-sshkey.in /etc/init.d/
#. Give the necessary permissions on the script:
.. parsed-literal::
chmod +x /etc/init.d/cloud-set-guest-sshkey.in
#. Configure the script to run at operating system startup:
.. parsed-literal::
chkconfig --add cloud-set-guest-sshkey.in
#. Stop the Instance.
Creating the SSH Keypair
------------------------
#. Log in to the CloudStack UI.
#. In the left navigation bar, click Compute --> SSH Key Pairs.
#. Click Create a SSH Key Pair.
#. In the dialog, make the following choices:
- **Name**: Any desired name for the SSH Key Pair.
- **Public key**: (Optional) Public key material of the SSH Key Pair.
.. note:: If this field is filled in, CloudStack will register the public key. If this field is left blank, CloudStack will create a new SSH key pair.
- **Domain**: (Optional) domain for the SSH Key Pair.
.. note:: If CloudStack generates a new SSH Key Pair (rather than registering a provided public key), it does not save the private key. The private key is shown only once, so be sure to save a copy of it.
You can also use the ``createSSHKeyPair`` API method to create an SSH Keypair. You can either
use the CloudStack Python API library or curl to make the
call to the CloudStack API.
For example, make a call from the CloudStack management server to create an SSH
keypair called "keypair-doc" for the admin Account in the root domain:
.. note::
Ensure that you adjust these values to meet your needs. If you are
making the API call from a different server, your URL/PORT will be
different, and you will need to use the API keys.
#. Run the following curl command:
.. parsed-literal::
curl --globoff "http://localhost:8096/?command=createSSHKeyPair&name=keypair-doc&account=admin&domainid=5163440e-c44b-42b5-9109-ad75cae8e8a2"
The output is something similar to what is given below:
.. parsed-literal::
<?xml version="1.0" encoding="ISO-8859-1"?><createsshkeypairresponse cloud-stack-version="3.0.0.20120228045507"><keypair><name>keypair-doc</name><fingerprint>f6:77:39:d5:5e:77:02:22:6a:d8:7f:ce:ab:cd:b3:56</fingerprint><privatekey>-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
-----END RSA PRIVATE KEY-----
</privatekey></keypair></createsshkeypairresponse>
#. Copy the key data into a file. The file looks like this:
.. parsed-literal::
-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
-----END RSA PRIVATE KEY-----
#. Save the file.
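Before the key can be used with ``ssh``, OpenSSH requires the private key file to have restrictive
permissions. This is a generic OpenSSH step, not specific to CloudStack; the path below matches the
login example later in this section:

.. parsed-literal::

   chmod 400 ~/.ssh/keypair-doc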
Creating an Instance
--------------------
After you save the SSH keypair file, you must create an Instance by
using the Template that you created in `“Creating an
Instance Template that Supports SSH Keys” <#create-ssh-template>`__.
Ensure that you use the same SSH key name that you created in
`“Creating the SSH Keypair” <#create-ssh-keypair>`__.
.. note::
You cannot currently use the GUI to create the Instance and
associate it with the newly created SSH keypair.
A sample curl command to create a new Instance is:
.. parsed-literal::
curl --globoff http://localhost:<port number>/?command=deployVirtualMachine\&zoneId=1\&serviceOfferingId=18727021-7556-4110-9322-d625b52e0813\&templateId=e899c18a-ce13-4bbf-98a9-625c5026e0b5\&securitygroupids=ff03f02f-9e3b-48f8-834d-91b822da40c5\&account=admin\&domainid=1\&keypair=keypair-doc
Substitute the Template, service offering and security group IDs (if you
are using the security group feature) that are in your cloud
environment.
Logging In Using the SSH Keypair
---------------------------------
To verify that your SSH key generation was successful, check whether you can log
in to the Instance using the key.
For example, from a Linux OS, run:
.. parsed-literal::
ssh -i ~/.ssh/keypair-doc <ip address>
The -i parameter tells the ssh client to use the SSH private key found at
~/.ssh/keypair-doc.
Resetting SSH Keys
------------------
A lost or compromised SSH keypair can be changed, and the user can access the Instance by using the new keypair.
#. Log in to the CloudStack UI.
#. In the left navigation bar, click Compute --> Instances.
#. Choose the Instance.
#. Click the Reset SSH Key Pair button on the Instance.
.. note:: The Instance must be in a Stopped state.
#. Select the SSH Key Pair(s) to add to the Instance.
.. note:: This can also be performed via the API: ``resetSSHKeyForVirtualMachine`` resets the assigned SSH keypair for an Instance.
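A minimal API sketch for the same operation, in the local curl style used elsewhere in this
document; the Instance ID is a placeholder and the parameter names should be verified against the
API reference for your version:

.. parsed-literal::

   # assign the keypair "keypair-doc" to a stopped Instance
   curl --globoff "http://localhost:8096/?command=resetSSHKeyForVirtualMachine&id=<instance id>&keypair=keypair-doc"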
.. include:: virtual_machines/user-data.rst
Assigning GPU/vGPU to Guest Instances
=====================================
CloudStack can deploy guest Instances with Graphics Processing Unit (GPU) or Virtual
Graphics Processing Unit (vGPU) capabilities on XenServer and KVM hosts. At the time of
Instance deployment or at a later stage, you can assign a physical GPU (known as
GPU passthrough) or a portion of a physical GPU card (vGPU) to a guest Instance by
changing the Service Offering. With this capability, Instances running on
CloudStack can meet intensive graphics-processing requirements with the
computing power of the GPU/vGPU, and CloudStack users can run graphics-rich
applications, such as AutoCAD, in a virtualized environment just as they would
at their desks.
For KVM, CloudStack leverages libvirt's PCI passthrough feature to assign a
physical GPU to a guest Instance. For vGPU profiles, depending on the vGPU type,
CloudStack uses mediated devices or Virtual Functions (VFs) to assign a virtual
GPU to a guest Instance. It is the operator's responsibility to ensure that
GPU devices are in the correct state and are available for use on the host. If the
operator wants to use vGPU profiles, they need to ensure that the vGPU type is
supported by the host and has been created on the host.
For XenServer, CloudStack leverages the XenServer support for NVIDIA GRID
Kepler 1 and 2 series to run GPU/vGPU enabled Instances.
Some NVIDIA cards allow sharing a single GPU card among multiple Instances by
creating vGPUs for each Instance. With vGPU technology, the graphics commands
from each Instance are passed directly to the underlying dedicated GPU, without
the intervention of the hypervisor. This allows the GPU hardware to be
time-sliced and shared across multiple Instances. The GPU cards are used in the
following ways:
**passthrough**: GPU passthrough represents a physical GPU which can be
directly assigned to an Instance. GPU passthrough can be used on a hypervisor alongside
GRID vGPU, with some restrictions: A GRID physical GPU can either host GRID
vGPUs or be used as passthrough, but not both at the same time.
**vGPU**: vGPU enables multiple Instances to share a single physical GPU.
The Instances run an NVIDIA driver stack and get direct access to the GPU. GRID
physical GPUs are capable of supporting multiple virtual GPU devices (vGPUs)
that can be assigned directly to guest Instances. Guest Instances use vGPUs in
the same manner as a physical GPU that has been passed through by the
hypervisor: an NVIDIA driver loaded in the guest Instance provides direct access to
the GPU for performance-critical fast paths, and a paravirtualized interface to
the NVIDIA vGPU Manager, which is used for non-performance-critical management
operations. The NVIDIA vGPU Manager for XenServer runs in dom0.
CloudStack provides you with the following capabilities:
- Adding hosts with GPU/vGPU capability provisioned by the administrator.
(Supports only XenServer & KVM)
- Creating a Compute Offering with GPU/vGPU capability. For KVM, it is possible to
specify the GPU count and whether to use the GPU for display. For XenServer,
GPU count is simply ignored and only one device is assigned to the guest Instance.
- Deploying an Instance with GPU/vGPU capability.
- Destroying an Instance with GPU/vGPU capability.
- Allowing a User to add GPU/vGPU support to an Instance that does not have it by
changing the Service Offering, and vice versa.
- Migrating Instances (cold migration) with GPU/vGPU capability.
- Managing GPU cards capacity.
- Querying hosts to obtain information about the GPU cards, supported vGPU types
in case of GRID cards, and capacity of the cards.
- Limiting an account/domain/project to a certain number of GPUs.
Prerequisites and System Requirements
-------------------------------------
Before proceeding, ensure that you have these prerequisites:
- CloudStack does not restrict the deployment of GPU-enabled Instances with
guest OS types that are not supported for GPU/vGPU functionality. The deployment
will succeed and a GPU/vGPU will be allocated to the Instance; however,
due to missing guest OS drivers, the Instance will not be able to leverage the GPU resources.
Therefore, it is recommended to use GPU-enabled service offerings only with supported guest operating systems.
- NVIDIA GRID K1 (16 GiB video RAM) and K2 (8 GiB video RAM) cards support
homogeneous virtual GPUs, which implies that, at any given time, the vGPUs resident on
a single physical GPU must all be of the same type. However, this restriction
doesn't extend across physical GPUs on the same card: each physical GPU on a
K1 or K2 may host a different type of virtual GPU at the same time. For example,
a GRID K2 card has two physical GPUs and supports four types of virtual GPU:
GRID K200, GRID K220Q, GRID K240Q, and GRID K260Q.
- NVIDIA driver must be installed to enable vGPU operation as for a physical NVIDIA GPU.
For XenServer:
- the vGPU-enabled XenServer 6.2 and later versions.
For more information, see `Citrix 3D Graphics Pack <https://www.citrix.com/go/private/vgpu.html>`_.
- GPU/vGPU functionality is supported for the following HVM guest operating systems:
For more information, see `Citrix 3D Graphics Pack <https://www.citrix.com/go/private/vgpu.html>`_.
- Windows 7 (x86 and x64)
- Windows Server 2008 R2
- Windows Server 2012
- Windows 8 (x86 and x64)
- Windows 8.1 ("Blue") (x86 and x64)
- Windows Server 2012 R2 (server equivalent of "Blue")
- XenServer tools must be installed in the Instance to get maximum performance on
XenServer, regardless of the type of vGPU you are using. Without the optimized
networking and storage drivers that the XenServer tools provide, remote
graphics applications running on GRID vGPU will not deliver maximum performance.
- To deliver high frame rates from multiple heads on vGPU, install XenDesktop with
HDX 3D Pro remote graphics.
Before continuing with configuration, consider the following:
- Deploying Instances with GPU/vGPU capability is not supported if no host
with enough GPU capacity is available.
- Dynamic scaling is not supported. However, you can choose to deploy an
Instance without GPU support, and at a later point, you can change the system
offering to upgrade to the one with vGPU. You can achieve this by offline
upgrade: stop the Instance, upgrade the Service Offering to the one with
vGPU, then start the Instance.
- Live migration of GPU/vGPU enabled Instance is not supported.
- Disabling GPU at Cluster level is not supported.
- Notification thresholds for GPU resources are not supported.
Supported GPU Devices for XenServer
-----------------------------------
.. cssclass:: table-striped table-bordered table-hover
=========== ========================
Device Type
=========== ========================
GPU - Group of NVIDIA Corporation GK107GL [GRID K1] GPUs
- Group of NVIDIA Corporation GK104GL [GRID K2] GPUs
- Any other GPU Group
vGPU - GRID K100
- GRID K120Q
- GRID K140Q
- GRID K200
- GRID K220Q
- GRID K240Q
- GRID K260Q
=========== ========================
GPU/vGPU Assignment Workflow
-----------------------------
CloudStack follows this sequence of operations to provide GPU/vGPU support for Instances:
#. Ensure that the host is ready with GPU installed and configured.
- For more information for XenServer, see `XenServer Documentation <https://docs.xenserver.com/en-us/citrix-hypervisor/graphics/hv-graphics-config>`_.
- For KVM, to configure the host, see `Discovering GPU Devices on KVM Hosts <hosts.html#discovering-gpu-devices-on-kvm-hosts>`_.
#. Add the host to CloudStack.
CloudStack queries the host and detects whether it is GPU-enabled.
#. Create a compute offering with GPU/vGPU support:
For more information, see `Creating a New Compute Offering <service_offerings.html#creating-a-new-compute-offering>`_ (an API sketch follows this list).
#. Continue with any of the following operations:
- Deploy an Instance.
Deploy an Instance with GPU/vGPU support by selecting an appropriate Service Offering. CloudStack decides which host to choose for Instance deployment based on the following criteria:
- The host has GPU cards in it. In the case of vGPU, CloudStack checks whether the cards support the required vGPU type and have enough capacity available. If no appropriate host is found, an InsufficientServerCapacity exception is raised.
- Alternatively, you can choose to deploy an Instance without GPU support and change the system offering at a later point. You can achieve this by offline upgrade: stop the Instance, upgrade the Service Offering to one with vGPU, then start the Instance.
In this case, CloudStack gets a list of hosts which have enough capacity to host the Instance. If there is a GPU-enabled host, CloudStack reorders this host list and places the GPU-enabled hosts at the bottom of the list.
- Migrate an Instance.
CloudStack searches for hosts available for Instance migration that satisfy the GPU requirement. If such a host is available, CloudStack stops the Instance on the current host and performs the Instance migration task. If the Instance migration is successful, the remaining GPU capacity is updated for both hosts accordingly.
- Destroy an Instance.
GPU resources are released automatically when you stop an Instance. Once the Instance is successfully destroyed, CloudStack makes a resource call to the host to get the remaining GPU capacity of the card and updates the database accordingly.
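As an illustration of step 3 above (creating a Compute Offering with GPU/vGPU support), a
vGPU-capable offering can also be created through the ``createServiceOffering`` API by passing
service offering details. The detail keys and values used below (``pciDevice``, ``vgpuType``) are
assumptions made for illustration and should be verified against the API reference for your
CloudStack version:

.. parsed-literal::

   # hypothetical vGPU-capable offering; the detail keys (pciDevice, vgpuType) are assumptions
   # to be checked against the createServiceOffering API reference for your version
   curl --globoff "http://localhost:8096/?command=createServiceOffering&name=vgpu-k140q&displaytext=vGPU%20K140Q%20offering&cpunumber=2&cpuspeed=2000&memory=4096&serviceofferingdetails[0].key=pciDevice&serviceofferingdetails[0].value=<GPU group name>&serviceofferingdetails[1].key=vgpuType&serviceofferingdetails[1].value=GRID%20K140Q"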
Instance Metrics
================
Instance statistics are collected on a regular interval (defined by global
setting vm.stats.interval with a default of 60000 milliseconds).
Instance statistics include compute, storage and Network statistics.
Instance statistics are stored in the database as historical data for a desired time period. These historical statistics can then be retrieved using the ``listVirtualMachinesUsageHistory`` API. For system VMs, the same historical statistics can be retrieved using the ``listSystemVmsUsageHistory`` API.
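A minimal retrieval sketch in the same local curl style as the other examples in this document; the
Instance ID and the date parameters are placeholders and should be checked against the API
reference for your version:

.. parsed-literal::

   # historical compute, storage and network stats for a single Instance over a time window
   curl --globoff "http://localhost:8096/?command=listVirtualMachinesUsageHistory&id=<instance id>&startdate=2024-01-01&enddate=2024-01-02"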
Instance statistics retention time in the database is controlled by the global configuration ``vm.stats.max.retention.time``, with a default value of 720 minutes, i.e., 12 hours. The interval at which the metrics are collected is defined by the global configuration ``vm.stats.interval``, which has a default value of 60,000 milliseconds, i.e., 1 minute. The default values are only meant as a guideline, as these settings can have a major impact on DB performance. The equation below gives the overall storage size required for given values of these configurations.
.. math::
StatsSize = (\frac{retention * 60000}{interval}) * nodes * VMs * registrySize
- **StatsSize**: the size, in `bytes`, required for storing the VM stats;
- **retention**: the value of the configuration ``vm.stats.max.retention.time``;
- **interval**: the value of the configuration ``vm.stats.interval``;
- **nodes**: the number of nodes running the management server in the environment;
- **VMs**: the number of running VMs in the environment;
- **registrySize**: the estimated size, in `bytes`, of a single stats record in the DB.
Considering the default values of the configurations ``vm.stats.max.retention.time`` and ``vm.stats.interval``, three nodes running the management server, 10,000 running VMs and an estimated record size of 400 bytes, approximately 8 GB of storage would be needed to store VM stats. Therefore, the values of these configurations should be chosen with the CloudStack environment in mind, weighing the required storage against its impact on DB performance.
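For reference, substituting the example values above into the equation:

.. math::

   StatsSize = (\frac{720 * 60000}{60000}) * 3 * 10000 * 400 = 8.64 * 10^{9}\ \mathrm{bytes} \approx 8\ \mathrm{GB}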
Another global configuration that affects Instance statistics is ``vm.stats.user.vm.only``. When set to 'false', stats for system VMs are collected as well; otherwise, stats are collected only for user Instances.
In the UI, historical Instance statistics are shown in the Metrics tab in an individual Instance view, as shown in the image below.
|vm-metrics-ui.png|
Instance Disk Metrics
---------------------
Similar to Instance statistics, Instance disk statistics (disk stats) can also be collected on a regular interval (defined by the global setting vm.disk.stats.interval, with a default value of 0 seconds, which disables disk stats collection). Disk stats are collected in the form of diskiopstotal, diskioread, diskiowrite, diskkbsread and diskkbswrite.
Instance disk statistics can also be stored in the database, and the historical statistics can be retrieved using the ``listVolumesUsageHistory`` API.
Instance disk statistics retention in the database is controlled by the global configuration `vm.disk.stats.retention.enabled`. The default value is false, i.e., retention of Instance disk statistics is disabled. Other global configurations that affect Instance disk statistics are:
- `vm.disk.stats.interval.min` - Minimal interval (in seconds) to report Instance disk statistics. If vm.disk.stats.interval is smaller than this, use this to report Instance disk statistics.
- `vm.disk.stats.max.retention.time` - The maximum time (in minutes) for keeping disk stats records in the database. The disk stats cleanup process will be disabled if this is set to 0 or less than 0.
Instance disk statistics are shown in the Metrics tab in an individual volume view, as shown in the image below.
|vm-disk-metrics-ui.png|
.. note::
The metrics or statistics for VMs and VM disks in CloudStack depend on the
hypervisor plugin used for each hypervisor. The behavior can vary across
different hypervisors. For instance, with KVM, metrics are real-time
statistics provided by libvirt. In contrast, with VMware, the metrics are
averaged data based on the global configuration parameter
`vmware.stats.time.window` and a lower value for the configuration may help
observe statistics closer to the real-time values.
.. |vm-lifecycle.png| image:: /_static/images/vm-lifecycle.png
:alt: Instance State Model
.. |vm-schedule-tab.png| image:: /_static/images/vm-schedule-tab.png
:alt: Instance Schedule Tab
.. |vm-schedule-form.png| image:: /_static/images/vm-schedule-form.png
:alt: Instance Schedule Form
.. |VMSnapshotButton.png| image:: /_static/images/VMSnapshotButton.png
:alt: button to take an Instance Snapshot
.. |delete-button.png| image:: /_static/images/delete-button.png
.. |EditButton.png| image:: /_static/images/edit-icon.png
:alt: button to edit the properties of an Instance
.. |change-affinity-button.png| image:: /_static/images/change-affinity-button.png
:alt: button to assign an affinity group to an Instance.
.. |ChangeServiceButton.png| image:: /_static/images/change-service-icon.png
:alt: button to change the service of an Instance
.. |Migrateinstance.png| image:: /_static/images/migrate-instance.png
:alt: button to migrate an Instance
.. |Destroyinstance.png| image:: /_static/images/destroy-instance.png
:alt: button to destroy an Instance
.. |iso.png| image:: /_static/images/iso-icon.png
:alt: depicts adding an iso image
.. |console-icon.png| image:: /_static/images/console-icon.png
:alt: button to open the Instance console
.. |revert-vm.png| image:: /_static/images/revert-vm.png
:alt: button to revert an Instance to a Snapshot
.. |StopButton.png| image:: /_static/images/stop-instance-icon.png
:alt: button to stop an Instance
.. |vm-settings-dropdown-list.png| image:: /_static/images/vm-settings-dropdown-list.png
:alt: List of possible VMware settings
.. |vm-settings-values-dropdown-list.png| image:: /_static/images/vm-settings-values-dropdown-list.png
:alt: List of possible VMware disk controllers
.. |vm-settings-values1-dropdown-list.png| image:: /_static/images/vm-settings-values1-dropdown-list.png
:alt: List of possible VMware NIC models
.. |vm-settings-values-dropdown-KVM-list.png| image:: /_static/images/vm-settings-values-dropdown-KVM-list.png
:alt: List of possible KVM disk controllers
.. |vm-settings-kvm-guest-cpu-model.png| image:: /_static/images/vm-settings-kvm-guest-cpu-model.png
:alt: List of possible KVM guest CPU models
.. |vm-settings-uefi-secure.png| image:: /_static/images/vm-settings-uefi-secure.png
:alt: Set boot type to UEFI and mode to SECURE
.. |vm-settings-virtual-tpm-model-kvm.png| image:: /_static/images/vm-settings-virtual-tpm-model-kvm.png
:alt: List of TPM models for KVM
.. |vm-settings-virtual-tpm-version-kvm.png| image:: /_static/images/vm-settings-virtual-tpm-version-kvm.png
:alt: List of TPM versions for KVM
.. |vm-settings-virtual-tpm-enabled-vmware.png| image:: /_static/images/vm-settings-virtual-tpm-enabled-vmware.png
:alt: Enable vTPM or not for VMware
.. |vm-metrics-ui.png| image:: /_static/images/vm-metrics-ui.png
:alt: VM metrics UI
.. |vm-disk-metrics-ui.png| image:: /_static/images/vm-disk-metrics-ui.png
:alt: VM Disk metrics UI