Virtualization limits and support

&qemu; is only supported when used for virtualization together with the &kvm; or &xen; hypervisors. The TCG accelerator is not supported, even when it is distributed within &suse; products. Users must not rely on &qemu; TCG to provide guest isolation or any security guarantees.

Architecture support

&kvm; hardware requirements

&suse; supports &kvm; full virtualization on &x86-64;, &aarch64;, &zseries; and &linuxone; hosts. On the &x86-64; architecture, &kvm; is designed around the hardware virtualization features included in AMD* (AMD-V) and Intel* (VT-x) CPUs. It supports the virtualization features of chipsets and PCI devices, such as an I/O Memory Management Unit (IOMMU) and Single Root I/O Virtualization (SR-IOV).

You can test whether your CPU supports hardware virtualization with the following command:

&prompt.user;egrep '(vmx|svm)' /proc/cpuinfo

If this command returns no output, your processor either does not support hardware virtualization, or this feature has been disabled in the BIOS or firmware. Intel and AMD each maintain a Web site identifying the &x86-64; processors that support hardware virtualization.

On the &arm; architecture, &armv8;-A processors include support for virtualization. On &arm;, running &qemu;/&kvm; is only supported with the CPU model host (named host-passthrough in &vmm; or &libvirt;).

Note: &kvm; kernel modules not loading
The &kvm; kernel modules only load if the CPU hardware virtualization features are available. A quick check is shown below.
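For a quick check that the modules have actually loaded, list them and verify that the /dev/kvm device node exists (a minimal sketch; the module is named kvm_intel or kvm_amd depending on the CPU vendor):

&prompt.user;lsmod | grep '^kvm'
&prompt.user;ls -l /dev/kvm

If neither kvm_intel nor kvm_amd is listed, re-check the CPU flags and the BIOS or firmware settings described above.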
The general minimum hardware requirements for the &vmhost; are the same as for a physical machine. However, additional RAM is needed for each virtualized guest: at least the same amount as for a physical installation. It is also strongly recommended to have at least one processor core or hyper-thread for each running guest.

&aarch64;

&aarch64; is a continuously evolving platform. It does not have a traditional standards and compliance certification program to enable interoperability with operating systems and hypervisors. Ask your vendor for the support statement on &sls;.

&power;

Running &kvm; or &xen; hypervisors on the &power; platform is not supported.

&xen; hardware requirements

&suse; supports &xen; on &x86-64;.

Hypervisor limits

New features and virtualization limits for &xen; and &kvm; are outlined in the Release Notes for each Service Pack (SP). Only packages that are part of the official repositories for &sls; are supported. Optional subpackages and plug-ins (for &qemu;, &libvirt;) provided via PackageHub are not supported. For the maximum total of virtual CPUs per host, see the host limits below; the total number of virtual CPUs should be proportional to the number of available physical CPUs.

Note: 32-bit hypervisor
With &productname; 11 SP2, we removed virtualization host facilities from 32-bit editions. 32-bit guests are not affected and are fully supported using the provided 64-bit hypervisor.

Intel TDX Confidential Computing

&slsa; 15 SP6 is the only service pack that includes kernel patches and tooling for the Intel TDX Host Confidential Computing technology, delivered in a dedicated module for the product. As this technology is not yet fully ready for a production environment, it is provided as a technology preview. &slsa; 15 SP7 will not provide the Intel TDX host technology; therefore, you should target &slsa; 16, which contains the upstream TDX patches in the kernel.

&kvm; limits

Supported (and tested) virtualization limits of a &productname; &productnumber; host running Linux guests on &x86-64;. For other operating systems, refer to the specific vendor.

&kvm; VM limits
- Maximum virtual CPUs per VM: 768
- Maximum memory per VM: 4 TiB
&kvm; host limits are identical to those of &sls; (see the corresponding section of the release notes), except for:
- Maximum virtual CPUs per host: see the recommendations in the Virtualization Best Practices Guide regarding over-commitment of physical CPUs. The total number of virtual CPUs should be proportional to the number of available physical CPUs.
&xen; limits

&xen; VM limits
- Maximum virtual CPUs per VM: 64 (HVM Windows guest), 128 (trusted HVMs), or 512 (PV)
- Maximum memory per VM: 2 TiB (64-bit guest), 16 GiB (32-bit guest with PAE)
&xen; host limits
- Maximum total physical CPUs: 1024
- Maximum total virtual CPUs per host: see the recommendations in the Virtualization Best Practices Guide regarding over-commitment of physical CPUs. The total number of virtual CPUs should be proportional to the number of available physical CPUs.
- Maximum physical memory: 16 TiB
- Suspend and hibernate modes: not supported
Supported host environments (hypervisors)

This section describes the support status of &productname; &productnumber; running as a guest operating system on top of different virtualization hosts (hypervisors).

The following &suse; host environments are supported:
- &sls; 12 SP5: Xen and KVM (an &sls; 15 SP7 guest must use UEFI boot)
- &sls; 15 SP3 to SP&product-sp;: Xen and KVM
The following third-party host environments are supported:
- Citrix XenServer
- Nutanix Acropolis Hypervisor with AOS
- Oracle VM Server 3.4
- Oracle Linux KVM 7, 8
- VMware ESXi 6.7, 7.0
- Windows Server 2019, 2022, 2025

You can also search the SUSE YES certification database.

The level of support is as follows:
- Support for SUSE host operating systems is full L3 (both for the guest and the host), according to the respective product life cycle.
- &suse; provides full L3 support for &productname; guests within third-party host environments. Support for the host, and its cooperation with &productname; guests, must be provided by the host system's vendor.
Supported guest operating systems

This section lists the support status for guest operating systems virtualized on top of &productname; &productnumber; for the &kvm; and &xen; hypervisors.

Note: &mswin; guests can be rebooted by &libvirt;/&virsh; only if paravirtualized drivers are installed in the guest. For details on downloading and installing PV drivers, see Availability of paravirtualized drivers below.

The following guest operating systems are fully supported (L3):
- &sls; 12 SP5
- &sls; 15 SP2, 15 SP3, 15 SP4, 15 SP5, 15 SP6, 15 SP7
- &slem; 5.1, 5.2, 5.3, 5.4, 5.5, 6.0
- Windows Server 2016, 2019
- Oracle Linux 6, 7, 8 (&kvm; hypervisor only)

The following guest operating systems are supported as a technology preview (L2, fixes if reasonable):
- &sleda; 15 SP3
- Windows 10 / 11

&redhat; and &centos; guest operating systems are fully supported (L3) if the customer has purchased &sliberty;. Refer to the &sliberty; documentation for the list of available combinations and supported releases. In other cases, they are supported on a limited basis (L2, fixes if reasonable).

Note: RHEL PV drivers
Starting from RHEL 7.2, &redhat; removed the &xen; PV drivers.

All other guest operating systems
In other combinations, L2 support is provided, but fixes are available only if feasible. &suse; fully supports the host OS (hypervisor); guest OS issues need to be handled by the respective OS vendor. If a fix involves both the host and the guest environments, the customer needs to approach both &suse; and the guest VM OS vendor.

All guest operating systems are supported both fully virtualized and paravirtualized, with two exceptions: Windows systems are only supported fully virtualized (but they can use PV drivers), and OES operating systems are supported only paravirtualized. All guest operating systems are supported both in 32-bit and 64-bit environments, unless stated otherwise.

Availability of paravirtualized drivers

To improve the performance of the guest operating system, paravirtualized drivers are provided when available. Although they are not required, it is strongly recommended to use them. A quick way to verify that they are in use is shown after the list below.

Starting with &sls; 12 SP2, we switched to a PVops kernel and no longer ship a dedicated kernel-xen package:
- kernel-default+kernel-xen on dom0 was replaced by the kernel-default package.
- The kernel-xen package on PV domU was replaced by the kernel-default package.
- kernel-default+xen-kmp on HVM domU was replaced by kernel-default.

For &sls; 12 SP1 and older (down to 10 SP4), the paravirtualized drivers are included in a dedicated kernel-xen package.

The paravirtualized drivers are available as follows:
- &productname;: included in the kernel
- &sls; 12 / 12 SP1 / 12 SP2: included in the kernel
- &sls; 11 / 11 SP1 / 11 SP2 / 11 SP3 / 11 SP4: included in the kernel
- &sls; 10 SP4: included in the kernel
- &redhat;: available since &rhel; 5.4; starting from &rhel; 7.2, &redhat; removed the PV drivers
- Windows: &suse; has developed virtio-based drivers for Windows, which are available in the Virtual Machine Driver Pack (VMDP)
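To verify from inside a Linux guest that the paravirtualized drivers are in use, you can, for example, list the loaded virtio modules (a minimal sketch; which modules appear, such as virtio_net or virtio_blk, depends on the configured devices):

&prompt.user;lsmod | grep virtio
&prompt.user;ls /sys/bus/virtio/devices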
Supported VM migration scenarios

&productname; supports migrating a virtual machine from one physical host to another.

Offline migration scenarios

&suse; supports offline migration, powering off a guest VM, then moving it to a host running a different &slea; product, from &slea; 12 to &slea; 15 SPX. A command sketch follows after the table below.

The following host operating system combinations are fully supported (L3) for migrating guests from one host to another:

[Table: Supported offline migration guests — source &slsa; hosts 12 SP3, 12 SP4 (1), 12 SP5, 15 GA, 15 SP1, 15 SP2, 15 SP3, 15 SP4, 15 SP5, 15 SP6 and 15 SP7 against target &slsa; hosts 12 SP3 to 15 SP7]
Legend:
- Fully compatible and fully supported
- 1: Supported for the &kvm; hypervisor only
- Not supported
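As a sketch of an offline migration between two &kvm; hosts (the guest name sles-guest is hypothetical, and the guest's disk image must be copied along with the XML definition):

&prompt.user;virsh shutdown sles-guest
&prompt.user;virsh dumpxml sles-guest > sles-guest.xml
# copy sles-guest.xml and the disk image to the target host, then on the target:
&prompt.user;virsh define sles-guest.xml
&prompt.user;virsh start sles-guest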
Live migration scenarios

This section lists the support status of live migration scenarios when running virtualized on top of &slsa;. Also refer to the supported offline migration scenarios above. A command sketch follows after the legend below.

The following host operating system combinations are fully supported (L3, according to the respective product life cycle).

Note: Live migration
&suse; always supports live migration of virtual machines between hosts running &slsa; with successive service pack numbers, for example, from &slsa; 15 SP4 to 15 SP5. &suse; strives to support live migration from a host running a service pack under LTSS to a host running a newer service pack within the same major version of &sls;, for example, from a &slsa; 12 SP2 host to a &slsa; 12 SP5 host. &suse; only performs minimal testing of LTSS-to-newer migration scenarios and recommends thorough on-site testing before migrating critical virtual machines.

Note: &xen; live migration
Live migration between &slea; 11 and &slea; 12 is not supported because of the different tool stacks. See the release notes for more details.

[Table: Supported live migration guests — source &slsa; hosts 12 SP3, 12 SP4 (1), 12 SP5, 15 GA, 15 SP1, 15 SP2, 15 SP3, 15 SP4, 15 SP5 and 15 SP6 (2) against target &slsa; hosts 12 SP4 to 15 SP7]
Legend:
- Fully compatible and fully supported
- 1: Supported for the &kvm; hypervisor only
- 2: When available
- Not supported
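As an illustration only, a live migration with &libvirt; is typically triggered as follows (sles-guest and target-host are hypothetical names, and shared storage between both hosts is assumed):

&prompt.user;virsh migrate --live --persistent sles-guest qemu+ssh://target-host/system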
Feature support

Nested virtualization: tech preview

Nested virtualization allows you to run a virtual machine inside another VM while still using hardware acceleration from the host. It has low performance and adds complexity while debugging, and is normally used for testing purposes. In &sls;, nested virtualization is a technology preview: it is only provided for testing and is not supported. Bugs can be reported, but they are treated with low priority. Any attempt to live migrate, or to save or restore, VMs in the presence of nested virtualization is also explicitly unsupported.

Post-copy live migration: tech preview

Post-copy is a live migration method intended to get VMs running on the destination host as soon as possible, with the VM RAM transferred gradually in the background as needed. Under certain conditions, this can be an optimization compared to the traditional pre-copy method. However, it comes with a major drawback: an error occurring during the migration (especially a network failure) can cause the whole VM RAM contents to be lost. Therefore, we recommend using only pre-copy in production; post-copy can be used for testing and experimentation when losing the VM state is not a major concern. See the command sketch below.
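The following commands sketch both points discussed above (sles-guest and target-host are hypothetical names; on AMD hosts, query kvm_amd instead of kvm_intel):

# check whether nested virtualization is enabled on an Intel host
&prompt.user;cat /sys/module/kvm_intel/parameters/nested
# request post-copy live migration; --postcopy-after-precopy switches over automatically
&prompt.user;virsh migrate --live --postcopy --postcopy-after-precopy sles-guest qemu+ssh://target-host/system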
&xen; host (Dom0)

Feature support—host (Dom0), &xen;:
- Network and block device hotplugging
- Physical and virtual CPU hotplugging (see the note below)
- Intel* VT-x2: FlexPriority, FlexMigrate (migration constraints apply to dissimilar CPU architectures)
- Intel* VT-d2 (DMA remapping with interrupt filtering and queued invalidation)
- AMD* IOMMU (I/O page table with guest-to-host physical address translation)

Note: Adding or removing physical CPUs at runtime is not supported
The addition or removal of physical CPUs at runtime is not supported. However, virtual CPUs can be added or removed for each &vmguest; while it is offline, as sketched below.
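For example, the number of virtual CPUs of a shut-down guest can be changed in its persistent configuration so that the new value takes effect on the next boot (sles-guest is a hypothetical guest name):

&prompt.user;virsh setvcpus sles-guest 4 --config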
Guest feature support

Note: Live migration of &xen; PV guests
For live migration, both source and target system architectures need to match; that is, the processors (AMD* or Intel*) must be the same. Unless CPU ID masking is used, such as with Intel FlexMigration, the target should feature the same processor revision as the source, or a more recent one. If VMs are moved among different systems, the same rules apply to each move. To avoid failing optimized code at runtime or application start-up, source and target CPUs need to expose the same processor extensions. &xen; exposes the physical CPU extensions to the VMs transparently. To summarize, guests can be 32-bit or 64-bit, but the CPU extensions exposed by the source and target hosts must be identical.

Note: Windows guest
Hotplugging of virtual network and virtual block devices, and resizing, shrinking and restoring dynamic virtual memory, are supported in &xen; and &kvm; only if PV drivers are being used (VMDP).

Note: Intel FlexMigration
For machines that support Intel FlexMigration, CPU-ID masking and faulting allow for more flexibility in cross-CPU migration.

For &kvm;, a detailed description of supported limits, features, recommended settings and scenarios, and other useful information is maintained in kvm-supported.txt. This file is part of the &kvm; package and can be found in /usr/share/doc/packages/qemu-kvm.

Guest feature support for &xen; and &kvm; (per guest type: &xen; PV guest (DomU), &xen; FV guest, &kvm; FV guest):
- Virtual network and virtual block device hotplugging
- Dynamic virtual memory resize
- VM save and restore
- VM live migration [1]
- VM snapshot
- Advanced debugging with GDBC
- Dom0 metrics visible to VM
- Memory ballooning
- &pciback; [2]
- AMD SEV [3]
Legend:
- Fully compatible and fully supported
- Not supported
[1] See the live migration scenarios above.
[2] &netware; guests are excluded.
[3] See the AMD Secure Encrypted Virtualization (AMD-SEV) guide.