Uninstallation Guide

This guide describes how to uninstall Open Service Mesh (OSM) from a Kubernetes cluster. It assumes a single OSM control plane (mesh) is running. If there are multiple meshes in a cluster, repeat the process described here for each control plane before uninstalling any cluster-wide resources at the end of the guide. Covering both the control plane and the data plane, this guide aims to walk through uninstalling all remnants of OSM with minimal downtime.

Prerequisites

  • Kubernetes cluster with OSM installed
  • The kubectl CLI
  • The osm CLI or the Helm 3 CLI

Remove Envoy Sidecars from Application Pods and Envoy Secrets

The first step to uninstalling OSM is to remove the Envoy sidecar containers from application pods. The sidecar containers enforce traffic policies. Without them, traffic will flow to and from Pods in accordance with default Kubernetes networking, unless Kubernetes Network Policies are applied.

OSM Envoy sidecars and related secrets will be removed in the following steps:

  1. Disable automatic sidecar injection
  2. Restart pods
  3. Delete Envoy bootstrap secrets

Disable Automatic Sidecar Injection

OSM Automatic Sidecar Injection is most commonly enabled by adding namespaces to the mesh via the osm CLI. Use the osm CLI to see which namespaces have sidecar injection enabled. If there are multiple control planes installed, be sure to specify the --mesh-name flag.

View namespaces in a mesh:

$ osm namespace list --mesh-name=<mesh-name>
<namespace1>       <mesh-name>    enabled
<namespace2>       <mesh-name>    enabled

Remove each namespace from the mesh:

$ osm namespace remove <namespace> --mesh-name=<mesh-name>
Namespace [<namespace>] successfully removed from mesh [<mesh-name>]
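If many namespaces belong to the mesh, they can be removed in one pass by piping the namespace list back into the remove command. This is a sketch, assuming the osm CLI is on the PATH and the namespace name is the first column of the list output:

```shell
# Remove every namespace from the mesh in one pass.
# If your osm CLI version prints a header row, skip it with `awk 'NR>1 {print $1}'`.
for ns in $(osm namespace list --mesh-name=<mesh-name> | awk '{print $1}'); do
  osm namespace remove "$ns" --mesh-name=<mesh-name>
done
```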

Alternatively, if sidecar injection is enabled via annotations on pods instead of per namespace, please modify the pod or deployment spec to remove the sidecar injection annotation.
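As a sketch, assuming the injection annotation key is openservicemesh.io/sidecar-injection (check your pod spec for the exact key), the annotation can also be cleared with kubectl; the trailing dash removes the annotation:

```shell
# Remove the sidecar injection annotation from a pod.
# The annotation key below is an assumption; verify it against your pod spec.
kubectl annotate pod <pod-name> -n <namespace> openservicemesh.io/sidecar-injection-
```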

Restart Pods

Restart all pods running with a sidecar:

# If pods are running as part of a Kubernetes deployment
# The same strategy works for daemonsets as well
$ kubectl rollout restart deployment <deployment-name> -n <namespace>

# If the pod is running standalone (not managed by a deployment or replica set)
$ kubectl delete pod <pod-name> -n <namespace>
$ kubectl apply -f <pod-spec> # if the pod is not recreated automatically
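Rather than restarting one deployment at a time, kubectl also accepts a resource type without a name, which restarts everything of that type in the namespace. A sketch, assuming all meshed workloads in the namespace are managed by Deployments:

```shell
# Restart every deployment in the namespace in one pass.
kubectl rollout restart deployment -n <namespace>

# Verify that new pods have come up without the Envoy sidecar
# (READY should show one fewer container per pod than before).
kubectl get pods -n <namespace>
```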

Now, there should be no OSM Envoy sidecar containers running as part of the applications that were once part of the mesh. Traffic is no longer managed by the OSM control plane with the mesh-name used above. During this process, your applications may experience some downtime as all the Pods are restarting.

Delete Envoy Bootstrap Secrets

Once the sidecar is removed, there is no need for the Envoy bootstrap config secrets OSM created. These are stored in the application namespace and can be deleted manually with kubectl. These secrets have the prefix envoy-bootstrap-config followed by some unique ID: envoy-bootstrap-config-<some-id-here>.
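For example, the secrets can be listed by their prefix and then deleted in one pass. A sketch; verify the list before deleting anything:

```shell
# List the Envoy bootstrap config secrets in an application namespace.
kubectl get secrets -n <namespace> -o name | grep '^secret/envoy-bootstrap-config-'

# Once verified, delete them.
# Note: -r (skip the command on empty input) is a GNU xargs flag.
kubectl get secrets -n <namespace> -o name \
  | grep '^secret/envoy-bootstrap-config-' \
  | xargs -r kubectl delete -n <namespace>
```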

Uninstall OSM Control Plane and Remove User Provided Resources

The OSM control plane and related components will be uninstalled in the following steps:

  1. Uninstall the OSM control plane
  2. Remove User Provided Resources
  3. Delete OSM Namespace

Uninstall the OSM control plane

Use the osm CLI to uninstall the OSM control plane from a Kubernetes cluster. The following step will remove:

  1. OSM controller resources (deployment, service, config map, and RBAC)
  2. Prometheus, Grafana, Jaeger, and Fluentbit resources installed by OSM
  3. Mutating webhook and validating webhook
  4. The conversion webhook fields that OSM patched onto the CRDs it installs/requires; the CRDs themselves are left in place, but unpatched. Refer to Removal of OSM Cluster Wide resources for more details

Run osm uninstall:

# Uninstall osm control plane components
$ osm uninstall --mesh-name=<mesh-name>
Uninstall OSM [mesh name: <mesh-name>] ? [y/n]: y
OSM [mesh name: <mesh-name>] uninstalled

Run osm uninstall --help for more options.

Alternatively, if you used Helm to install the control plane, run the following helm uninstall command:

$ helm uninstall <mesh name> --namespace <osm namespace>

Run helm uninstall --help for more options.

Remove User Provided Resources

If any resources were provided or created for OSM at install time, they can be deleted at this point.

For example, if Hashicorp Vault was deployed for the sole purpose of managing certificates for OSM, all related resources can be deleted.

Delete OSM Namespace

When installing a mesh, the osm CLI creates the namespace the control plane is installed into if it does not already exist. However, when uninstalling the same mesh, the osm CLI does not automatically delete that namespace. This is intentional: the namespace may contain user-created resources that should not be deleted automatically.

If the namespace was only used for OSM and there is nothing that needs to be kept around, the namespace can be deleted at this time with kubectl.

$ kubectl delete namespace <namespace>
namespace "<namespace>" deleted

Repeat the steps above for each mesh installed in the cluster. After there are no OSM control planes remaining, move to the following step.

Removal of OSM Cluster Wide resources

OSM ensures that all the CRDs it requires exist in the cluster at install time. If they are not already installed, the osm-bootstrap pod installs them before the rest of the control plane components start running. The same behavior applies when installing OSM with the Helm charts. Uninstalling OSM only removes (un-patches) the conversion webhook fields from the CRDs, which OSM adds to support multiple CR versions; it does not delete the CRDs themselves, for two main reasons:

  1. CRDs are cluster-wide resources and may be used by other service meshes running in the same cluster.
  2. Deleting a CRD causes all custom resources corresponding to that CRD to be deleted as well.

If there are no other service meshes running in the same cluster and the required custom resources have been backed up, the CRDs can be removed from the cluster using kubectl.
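Custom resources can be exported with kubectl before the CRDs are deleted. A sketch, using one of the CRDs listed below; repeat for each CRD that has resources worth keeping:

```shell
# Export all egress policies across namespaces to a YAML backup file.
kubectl get egresses.policy.openservicemesh.io -A -o yaml > osm-egresses-backup.yaml
```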

Run the following kubectl commands:

kubectl delete crd meshconfigs.config.openservicemesh.io
kubectl delete crd multiclusterservices.config.openservicemesh.io
kubectl delete crd egresses.policy.openservicemesh.io
kubectl delete crd ingressbackends.policy.openservicemesh.io
kubectl delete crd httproutegroups.specs.smi-spec.io
kubectl delete crd tcproutes.specs.smi-spec.io
kubectl delete crd traffictargets.access.smi-spec.io
kubectl delete crd trafficsplits.split.smi-spec.io
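Afterwards, a quick check can confirm that no OSM or SMI CRDs remain; empty output means the cleanup succeeded:

```shell
# List any remaining OSM / SMI CRDs in the cluster.
kubectl get crds -o name | grep -E 'openservicemesh\.io|smi-spec\.io'
```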