// Module included in the following assemblies:
//
// * networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc

:_mod-docs-content-type: REFERENCE
[id="nw-ovn-kubernetes-live-migration-about_{context}"]
= Limited live migration to the OVN-Kubernetes network plugin overview

The limited live migration method is the process in which the OpenShift SDN network plugin and its network configurations, connections, and associated resources are migrated to the OVN-Kubernetes network plugin without service interruption. It is available for {product-title} and is the preferred method for migrating from OpenShift SDN to OVN-Kubernetes. If you cannot perform a limited live migration, you can use the offline migration method instead.

[IMPORTANT]
====
Before you migrate your {product-title} cluster to use the OVN-Kubernetes network plugin, update your cluster to the latest z-stream release so that all the latest bug fixes apply to your cluster.
====

// Azure Red Hat OpenShift deployment types will need to be added in a z-stream release.

The limited live migration method is not available for hosted control plane deployment types. This migration method is valuable for deployment types that require constant service availability and offers the following benefits:

* Continuous service availability
* Minimized downtime
* Automatic node rebooting
* Seamless transition from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin

Although a rollback procedure is provided, the limited live migration is intended to be a one-way process.

include::snippets/sdn-deprecation-statement.adoc[]

The following sections provide more information about the limited live migration method.

[id="supported-platforms-live-migrating-ovn-kubernetes"]
== Supported platforms when using the limited live migration method

The following table provides information about the supported platforms for the limited live migration type.

.Supported platforms for the limited live migration method
[cols="1,1", options="header"]
|===
| Platform | Limited live migration

| Bare-metal hardware |✓
| {aws-first} |✓
| {gcp-first} |✓
| {ibm-cloud-name} |✓
| {azure-first} |✓
| {rh-openstack-first} |✓
| {vmw-first} |✓
| Nutanix |✓

|===

[NOTE]
====
Each listed platform supports installing an {product-title} cluster on installer-provisioned infrastructure and user-provisioned infrastructure.
====

[id="best-practices-live-migrating-ovn-kubernetes-network-provider_{context}"]
== Best practices for limited live migration to the OVN-Kubernetes network plugin

For a list of best practices when migrating to the OVN-Kubernetes network plugin with the limited live migration method, see link:https://access.redhat.com/solutions/7057169[Limited Live Migration from OpenShift SDN to OVN-Kubernetes].

[id="considerations-live-migrating-ovn-kubernetes-network-provider_{context}"]
== Considerations for limited live migration to the OVN-Kubernetes network plugin

Before using the limited live migration method to migrate to the OVN-Kubernetes network plugin, cluster administrators should consider the following information:

* The limited live migration procedure is unsupported for clusters with OpenShift SDN multitenant mode enabled.
* Egress router pods block the limited live migration process. They must be removed before beginning the limited live migration process.
* During the migration, when the cluster is running with both OVN-Kubernetes and OpenShift SDN, multicast and egress IP addresses are temporarily disabled for both CNIs. Egress firewalls remain functional.
* The migration is intended to be a one-way process. However, users who want to roll back to OpenShift SDN can follow the same procedure to migrate from the OVN-Kubernetes network plugin back to the OpenShift SDN network plugin, provided that the migration from OpenShift SDN to OVN-Kubernetes succeeded.
* The limited live migration is not supported on HyperShift clusters.
* OpenShift SDN does not support IPsec. After the migration, cluster administrators can enable IPsec.
* OpenShift SDN does not support IPv6. After the migration, cluster administrators can enable dual-stack networking.
* The OpenShift SDN plugin allows application of the `NodeNetworkConfigurationPolicy` (NNCP) custom resource (CR) to the primary interface on a node. The OVN-Kubernetes network plugin does not have this capability.
* The cluster MTU is the MTU value for pod interfaces. It is always less than your hardware MTU to account for the cluster network overlay overhead. The overhead is 100 bytes for OVN-Kubernetes and 50 bytes for OpenShift SDN.
+
During the limited live migration, both OVN-Kubernetes and OpenShift SDN run in parallel. OVN-Kubernetes manages the cluster network of some nodes, while OpenShift SDN manages the cluster network of others. To ensure that cross-CNI traffic remains functional, the Cluster Network Operator updates the routable MTU so that both CNIs share the same overlay MTU. As a result, after the migration has completed, the cluster MTU is 50 bytes less than before.
* OVN-Kubernetes reserves the `100.64.0.0/16` and `100.88.0.0/16` IP address ranges. These subnets cannot overlap with any other internal or external network. If these IP addresses have been used by OpenShift SDN or by any external networks that might communicate with this cluster, you must patch them to use a different IP address range before starting the limited live migration. See "Patching OVN-Kubernetes address ranges" for more information, and see the example subnet patch after this list.
* If your `openshift-sdn` cluster uses Precision Time Protocol (PTP) with the User Datagram Protocol (UDP) for hardware time stamping and you migrate to the OVN-Kubernetes plugin, hardware time stamping cannot be applied to primary interface devices, such as an Open vSwitch (OVS) bridge. As a result, UDP version 4 configurations cannot work with a `br-ex` interface.
* In most cases, the limited live migration is independent of the secondary interfaces of pods created by the Multus CNI plugin. However, if these secondary interfaces were set up on the default network interface controller (NIC) of the host, for example, using MACVLAN, IPVLAN, SR-IOV, or bridge interfaces with the default NIC as the control node, OVN-Kubernetes might encounter malfunctions. Remove such configurations before proceeding with the limited live migration.
* When the host has multiple NICs and the default route is not on the interface that has the Kubernetes NodeIP, you must use the offline migration method instead.
* All `DaemonSet` objects in the `openshift-sdn` namespace that are not managed by the Cluster Network Operator (CNO) must be removed before initiating the limited live migration. If they are not properly handled, these unmanaged daemon sets can cause the migration status to remain incomplete. For a sketch of commands that you can use to check this and related considerations, see the first example after this list.
* If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the update process.
If `minAvailable` is set to `1` in the `PodDisruptionBudget` object, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
* Like OpenShift SDN, OVN-Kubernetes resources, such as `EgressFirewall` objects, require cluster administrator privileges. Migrating from OpenShift SDN to OVN-Kubernetes does not automatically update role-based access control (RBAC) resources. OpenShift SDN resources granted to a project administrator through the `aggregate-to-admin` `ClusterRole` must be manually reviewed and adjusted, because these changes are not included in the migration process.
+
After migration, manual verification of RBAC resources is required. For information about setting the `aggregate-to-admin` `ClusterRole` after migration, see the example in link:https://access.redhat.com/solutions/6117301[How to allow project admins to manage Egressfirewall resources in RHOCP4] and the example `ClusterRole` after this list.
* When a cluster depends on static routes or routing policies in the host network so that pods can reach some destinations, set the `routingViaHost` spec to `true` and `ipForwarding` to `Global` in the `gatewayConfig` object before the migration, as shown in the example gateway configuration patch after this list. This offloads routing decisions to the host kernel. For more information, see link:https://access.redhat.com/solutions/7070870[Recommended practice to follow before Openshift SDN network plugin migration to OVNKubernetes plugin] (Red Hat Knowledgebase) and step five in "Checking cluster resources before initiating the limited live migration".
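The following commands are a minimal sketch of how you might check several of the preceding considerations before starting the migration: the OpenShift SDN mode, the daemon sets in the `openshift-sdn` namespace, and the pod disruption budgets in the cluster. The commands assume cluster administrator access with the `oc` CLI; the steps in "Checking cluster resources before initiating the limited live migration" remain the authoritative pre-migration checks.

.Example: checking cluster resources before the migration
[source,terminal]
----
# Confirm that OpenShift SDN is not running in multitenant mode.
# An empty value or "NetworkPolicy" indicates that the default mode is in use.
$ oc get network.operator.openshift.io cluster \
    -o jsonpath='{.spec.defaultNetwork.openshiftSDNConfig.mode}'

# List the daemon sets in the openshift-sdn namespace so that you can
# identify and remove any that the CNO does not manage.
$ oc get daemonsets -n openshift-sdn

# List pod disruption budgets in all namespaces and review any with
# minAvailable set to 1, because they can block node drains.
$ oc get poddisruptionbudgets --all-namespaces
----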
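If the `100.64.0.0/16` or `100.88.0.0/16` range overlaps with an existing network, patch OVN-Kubernetes to use different internal ranges before you start the migration. The following commands are a sketch only: the replacement ranges `100.68.0.0/16` and `100.69.0.0/16` are placeholders, and the `ipv4.internalJoinSubnet` and `ipv4.internalTransitSwitchSubnet` field names are assumptions that you should verify against "Patching OVN-Kubernetes address ranges" for your release.

.Example: patching the OVN-Kubernetes internal subnets
[source,terminal]
----
# Sketch: move the internal join subnet (default 100.64.0.0/16) to an unused range.
$ oc patch network.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipv4":{"internalJoinSubnet":"100.68.0.0/16"}}}}}'

# Sketch: move the internal transit switch subnet (default 100.88.0.0/16) to an unused range.
$ oc patch network.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipv4":{"internalTransitSwitchSubnet":"100.69.0.0/16"}}}}}'
----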
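Because the migration does not update RBAC resources, project administrators who managed OpenShift SDN egress resources through the `aggregate-to-admin` `ClusterRole` need equivalent access to the OVN-Kubernetes `EgressFirewall` resources. The following `ClusterRole` is a minimal sketch of one way to aggregate that access to the default `admin` role; the `egressfirewall-project-admin` name is an example only, and the linked Red Hat Knowledgebase solution describes the supported configuration.

.Example: aggregating EgressFirewall permissions to project administrators
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: egressfirewall-project-admin # example name
  labels:
    # Aggregate these rules into the default admin ClusterRole.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups:
  - k8s.ovn.org
  resources:
  - egressfirewalls
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
----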
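For clusters that depend on static routes or routing policies in the host network, the following command is a sketch of how you might set `routingViaHost` and `ipForwarding` in the `gatewayConfig` object before the migration. Confirm the settings against the linked Red Hat Knowledgebase solution before you apply them.

.Example: enabling local gateway mode with global IP forwarding
[source,terminal]
----
# Route pod egress traffic through the host kernel (local gateway mode)
# and allow the host to forward traffic for non-cluster destinations.
$ oc patch network.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true,"ipForwarding":"Global"}}}}}'
----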