Migrating from Bootstrap to Operator installation
Overview
If you previously installed Hybrid Manager using the Bootstrap method (edbpgai-bootstrap helm chart with values.yaml) and want to migrate to the Operator method (edb-hcp-operator helm chart with HybridControlPlane CR), this guide provides the necessary steps.
Critical Warning - Read This First
DO NOT use helm uninstall edbpgai-bootstrap without following this guide!
Simply uninstalling the bootstrap deletes all CRDs (Custom Resource Definitions), which will cascade to delete all your database clusters and destroy your data. Follow the CRD-preserving migration steps in this guide to avoid data loss.
Important
You cannot directly install the operator helm chart on top of an existing bootstrap installation without first removing the bootstrap deployment. Attempting to do so results in Helm ownership conflicts.
Understanding the migration challenge
The Helm ownership conflict
If you try to install the operator on top of an existing bootstrap installation, you encounter an error like this:
```shell
Release "edb-hcp-operator" does not exist. Installing it now.
Error: Unable to continue with install: ServiceAccount "edb-hcp-operator-controller-manager" in namespace "edbpgai-bootstrap" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "edb-hcp-operator": current value is "edbpgai-bootstrap"
```
Why this happens
- The bootstrap installation creates Kubernetes resources owned by the `edbpgai-bootstrap` Helm release
- The operator installation tries to create resources owned by the `edb-hcp-operator` Helm release
- Helm prevents one release from taking ownership of another release's resources
- Both installations use overlapping namespaces and resource names
The CRD deletion problem
Critical
DO NOT use helm uninstall edbpgai-bootstrap without preserving CRDs first!
When you uninstall the bootstrap Helm chart, it deletes all Custom Resource Definitions (CRDs). Kubernetes then cascades that deletion to all of your database clusters and custom resources, destroying your data. Follow the CRD-preserving steps in this guide to avoid this.
The bootstrap installation manages CRDs (Custom Resource Definitions) that define resources like:
- HybridControlPlane
- Preflight
- Postflight
If these CRDs are deleted, Kubernetes automatically deletes all instances of these custom resources, rendering your HM platform non-functional.
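Before making any changes, it is worth listing the CRDs in question and any live instances of them, so you know exactly what a cascade deletion would take with it:

```shell
# List the HM CRDs that the bootstrap release owns.
kubectl get crd | grep edbpgai.edb.com

# Any live HybridControlPlane instances would be deleted along with the CRD.
kubectl get hybridcontrolplanes -A
```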
Convert your configuration
Before transferring Helm ownership, you must convert your existing values.yaml into a HybridControlPlane Custom Resource manifest. The operator does not read values.yaml — it requires a CR to define the desired state of your HM installation.
Configuration mapping reference
The following table shows how values.yaml fields map to HybridControlPlane CR fields:
| values.yaml field | HybridControlPlane field | Description |
|---|---|---|
| `system` | `spec.flavour` | Kubernetes distribution (`eks`, `aks`, `gke`, `rhos`, etc.) |
| `containerRegistryURL` | `spec.imageRegistry` | Container registry URL |
| `bootstrapImageTag` | `spec.version` | HM version to install |
| `imagePullSecrets` | `spec.imagePullSecrets[]` | Image pull secret references. In values.yaml this is a list of secret names (strings). In the CR, each entry requires `name` and `namespace` fields. |
| `disabledComponents` | `spec.disabledComponents[]` | Components to disable |
| `parameters.global.*` | `spec.globalParameters` | Global configuration parameters |
| `parameters.<component>.*` | `spec.componentsParameters.<component>` | Component-specific parameters |
| `beaconServer` | `spec.beaconServer` | Beacon server configuration |
| `beaconAgent` | `spec.beaconAgent` | Beacon agent configuration |
| `clusterGroups` | `spec.clusterGroups` | Multi-DC cluster topology |
| `scenarios` | `spec.scenarios[]` | Installation scenarios (core, migration, ai, analytics). In values.yaml this is a comma-separated string (e.g., `"core,migration,ai"`). In the CR, it is a YAML list. |
| `resourceAnnotations` | `spec.resourceAnnotations[]` | Custom resource annotations |
| `source.remote` | `spec.source.remote` | Use remote bootstrap image for manifests |
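As an illustration of the mapping, here is a minimal hypothetical values.yaml and the HybridControlPlane CR it translates to. The field values (registry URL, version, scenarios) are made-up examples, not defaults:

```yaml
# values.yaml (bootstrap) -- illustrative values only
system: eks
containerRegistryURL: registry.example.com/edb
bootstrapImageTag: "1.2.3"
scenarios: "core,migration"
---
# hybridmanager.yaml (operator) -- the equivalent HybridControlPlane CR
apiVersion: edbpgai.edb.com/v1alpha1
kind: HybridControlPlane
metadata:
  name: edbpgai
spec:
  flavour: eks
  imageRegistry: registry.example.com/edb
  version: "1.2.3"
  scenarios:
    - core
    - migration
```

Note how the comma-separated `scenarios` string becomes a YAML list in the CR, per the table above.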
Convert using the migration script
Use the following script to automatically generate a HybridControlPlane CR from your existing values.yaml.
Prerequisites: yq (version 4+) must be installed.
```shell
#!/bin/bash
# convert-values-to-hcp.sh
# Converts a Bootstrap values.yaml to a HybridControlPlane Custom Resource manifest.
#
# Usage: ./convert-values-to-hcp.sh <values.yaml> [output-file]
#        If output-file is omitted, prints to stdout.

set -euo pipefail

VALUES_FILE="${1:?Usage: $0 <values.yaml> [output-file]}"
OUTPUT_FILE="${2:-}"

if [ ! -f "$VALUES_FILE" ]; then
  echo "Error: File '$VALUES_FILE' not found." >&2
  exit 1
fi

if ! command -v yq &> /dev/null; then
  echo "Error: 'yq' (v4+) is required. Install from https://github.com/mikefarah/yq" >&2
  exit 1
fi

# Build the HybridControlPlane CR using yq
HCP=$(yq -n '
  .apiVersion = "edbpgai.edb.com/v1alpha1" |
  .kind = "HybridControlPlane" |
  .metadata.name = "edbpgai" |
  .metadata.annotations."edbpgai.com/ready-for-upgrade" = "true" |
  .metadata.annotations."biganimal.enterprisedb.io/deletion-protect" = "enabled"
')

# Basic configuration mapping
SYSTEM=$(yq '.system // ""' "$VALUES_FILE")
[ -n "$SYSTEM" ] && HCP=$(echo "$HCP" | yq ".spec.flavour = \"$SYSTEM\"")

REGISTRY=$(yq '.containerRegistryURL // ""' "$VALUES_FILE")
[ -n "$REGISTRY" ] && HCP=$(echo "$HCP" | yq ".spec.imageRegistry = \"$REGISTRY\"")

VERSION=$(yq '.bootstrapImageTag // ""' "$VALUES_FILE")
[ -n "$VERSION" ] && HCP=$(echo "$HCP" | yq ".spec.version = \"$VERSION\"")

# Source configuration
SOURCE_REMOTE=$(yq '.source.remote // ""' "$VALUES_FILE")
if [ -n "$SOURCE_REMOTE" ] && [ "$SOURCE_REMOTE" = "true" ]; then
  HCP=$(echo "$HCP" | yq ".spec.source.remote = true")
else
  HCP=$(echo "$HCP" | yq ".spec.source.useLocalKustomizations = true")
fi

# Image pull secrets
PULL_SECRETS_COUNT=$(yq '.imagePullSecrets | length // 0' "$VALUES_FILE" 2>/dev/null || echo "0")
if [ "$PULL_SECRETS_COUNT" -gt 0 ]; then
  SECRETS_JSON=$(yq -o=json '.imagePullSecrets | [.[] | select(. != null and . != "") | {"name": ., "namespace": "edbpgai-bootstrap"}]' "$VALUES_FILE")
  HCP=$(echo "$HCP" | yq ".spec.imagePullSecrets = $SECRETS_JSON")
fi

# Global parameters (parameters.global.* -> spec.globalParameters)
GLOBAL_PARAMS=$(yq '.parameters.global // ""' "$VALUES_FILE")
if [ -n "$GLOBAL_PARAMS" ] && [ "$GLOBAL_PARAMS" != "null" ]; then
  HCP=$(echo "$HCP" | yq ".spec.globalParameters = $(yq -o=json '.parameters.global' "$VALUES_FILE")")
  [ -n "$REGISTRY" ] && HCP=$(echo "$HCP" | yq ".spec.globalParameters.CONTAINER_REGISTRY_URL = \"$REGISTRY\"")
  [ -n "$VERSION" ] && HCP=$(echo "$HCP" | yq ".spec.globalParameters.HCP_VERSION = \"$VERSION\"")
fi

# Component-specific parameters (parameters.<component>.* -> spec.componentsParameters.<component>)
COMPONENTS=$(yq '.parameters | keys | .[] | select(. != "global")' "$VALUES_FILE" 2>/dev/null || true)
if [ -n "$COMPONENTS" ]; then
  while IFS= read -r component; do
    HCP=$(echo "$HCP" | yq ".spec.componentsParameters.[\"$component\"] = $(yq -o=json ".parameters.[\"$component\"]" "$VALUES_FILE")")
  done <<< "$COMPONENTS"
fi

# Beacon server configuration
BEACON_SERVER=$(yq '.beaconServer // ""' "$VALUES_FILE")
if [ -n "$BEACON_SERVER" ] && [ "$BEACON_SERVER" != "null" ]; then
  HCP=$(echo "$HCP" | yq ".spec.beaconServer = $(yq -o=json '.beaconServer' "$VALUES_FILE")")
fi

# Beacon agent configuration
BEACON_AGENT=$(yq '.beaconAgent // ""' "$VALUES_FILE")
if [ -n "$BEACON_AGENT" ] && [ "$BEACON_AGENT" != "null" ]; then
  HCP=$(echo "$HCP" | yq ".spec.beaconAgent = $(yq -o=json '.beaconAgent' "$VALUES_FILE")")
fi

# Cluster groups
CLUSTER_GROUPS=$(yq '.clusterGroups // ""' "$VALUES_FILE")
if [ -n "$CLUSTER_GROUPS" ] && [ "$CLUSTER_GROUPS" != "null" ]; then
  HCP=$(echo "$HCP" | yq ".spec.clusterGroups = $(yq -o=json '.clusterGroups' "$VALUES_FILE")")
fi

# Installation scenarios (comma-separated string -> YAML list)
SCENARIOS=$(yq '.scenarios // ""' "$VALUES_FILE")
if [ -n "$SCENARIOS" ] && [ "$SCENARIOS" != "null" ]; then
  SCENARIOS_JSON=$(echo "$SCENARIOS" | tr ',' '\n' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//' | yq -o=json '[.[] | select(. != "")]' -)
  HCP=$(echo "$HCP" | yq ".spec.scenarios = $SCENARIOS_JSON")
fi

# Disabled components
DISABLED=$(yq '.disabledComponents // ""' "$VALUES_FILE")
if [ -n "$DISABLED" ] && [ "$DISABLED" != "null" ]; then
  HCP=$(echo "$HCP" | yq ".spec.disabledComponents = $(yq -o=json '.disabledComponents' "$VALUES_FILE")")
fi

# Resource annotations
ANNOTATIONS=$(yq '.resourceAnnotations // ""' "$VALUES_FILE")
if [ -n "$ANNOTATIONS" ] && [ "$ANNOTATIONS" != "null" ]; then
  HCP=$(echo "$HCP" | yq ".spec.resourceAnnotations = $(yq -o=json '.resourceAnnotations' "$VALUES_FILE")")
fi

# Output
if [ -n "$OUTPUT_FILE" ]; then
  echo "$HCP" > "$OUTPUT_FILE"
  echo "HybridControlPlane CR written to: $OUTPUT_FILE"
else
  echo "$HCP"
fi
```
Usage:
```shell
# Print the converted CR to stdout:
chmod +x convert-values-to-hcp.sh
./convert-values-to-hcp.sh values.yaml

# Write to a file:
./convert-values-to-hcp.sh values.yaml hybridmanager.yaml
```
Important
Review the generated manifest carefully before applying it:
- `spec.version`: The bootstrap `bootstrapImageTag` may use a different versioning scheme than the operator `spec.version`.
- `spec.imagePullSecrets`: The script assumes all secrets are in the `edbpgai-bootstrap` namespace. Adjust if your secrets are in a different namespace.
Once you have a validated hybridmanager.yaml, proceed with the Helm ownership transfer below. You will apply the CR after the transfer is complete.
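Before proceeding, you can check that the generated manifest parses and matches the installed CRD schema without creating anything, using a server-side dry run (the CRDs already exist, since the bootstrap installed them):

```shell
# Validate the generated CR against the cluster's CRD schema without creating it.
kubectl apply --dry-run=server -f hybridmanager.yaml
```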
Helm ownership transfer
This approach transfers Helm ownership of existing resources from the edbpgai-bootstrap release to the edb-hcp-operator release.
This allows the operator to adopt and manage the existing infrastructure without recreating anything.
Note
This method preserves all resources (CRDs, RBAC, webhooks, database clusters) and simply changes the Helm ownership annotations. Your database clusters will remain running throughout the migration, and the operator will seamlessly take over management.
How it works
The migration works by changing the meta.helm.sh/release-name annotation on all resources from edbpgai-bootstrap to edb-hcp-operator. This tells Helm that these resources now belong to the operator release instead of the bootstrap release.
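Conceptually, the transfer is a single string substitution in each resource's metadata. A toy illustration on a manifest fragment (the real steps below use `kubectl annotate --overwrite` against the live objects, never text substitution):

```shell
# A manifest fragment as the bootstrap release would have annotated it (illustrative).
fragment='meta.helm.sh/release-name: edbpgai-bootstrap'

# Rewriting the annotation value is all the "ownership transfer" amounts to.
migrated=$(printf '%s\n' "$fragment" | sed 's/edbpgai-bootstrap$/edb-hcp-operator/')

printf '%s\n' "$migrated"
# prints: meta.helm.sh/release-name: edb-hcp-operator
```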
Key benefits of this approach:
- No resource deletion - CRDs, HybridControlPlane, Preflight, and Postflight resources remain intact
- No Helm conflicts - The operator can adopt existing resources without ownership errors
- Minimal downtime - Only the HM operator pods are affected during the transition
- Simple rollback - Can transfer annotations back if needed
- Clean migration - After verification, the bootstrap namespace can be safely removed
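If you need to roll back before installing the operator, the same annotate commands run in reverse. For example, for the CRDs:

```shell
# Rollback: return CRD ownership to the bootstrap release.
for crd in $(kubectl get crd | grep edbpgai.edb.com | awk '{print $1}'); do
  kubectl annotate crd "$crd" \
    meta.helm.sh/release-name=edbpgai-bootstrap \
    --overwrite
done
```

The same pattern (re-annotating back to `edbpgai-bootstrap`) applies to every resource type transferred below.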
What gets transferred:
- Custom Resource Definitions (CRDs)
- ClusterRoles and ClusterRoleBindings
- Roles and RoleBindings in the bootstrap namespace
- ServiceAccounts
- Services
- Webhook configurations
Prerequisites
- Access to kubectl and helm CLI tools
- `jq` command-line JSON processor installed
The following script performs all migration steps in sequence. You can run it as a single script or execute each step manually.
```shell
#!/bin/bash
set -e

echo "Starting migration from Bootstrap to Operator installation..."

echo "Step 1: Deleting bootstrap deployments..."
kubectl delete deployments -n edbpgai-bootstrap --all

echo "Step 2: Transferring Helm ownership of ServiceAccount..."
kubectl annotate sa edb-hcp-operator-controller-manager \
  meta.helm.sh/release-name=edb-hcp-operator \
  --overwrite \
  -n edbpgai-bootstrap

echo "Step 3: Transferring Helm ownership of CRDs..."
for line in $(kubectl get crd | grep edbpgai.edb.com | awk '{print $1}'); do
  kubectl annotate crd $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite
done

echo "Step 4: Transferring Helm ownership of ClusterRoles..."
for line in $(kubectl get clusterroles -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate clusterrole $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite
done

echo "Step 5: Transferring Helm ownership of ClusterRoleBindings..."
for line in $(kubectl get clusterrolebindings -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate clusterrolebinding $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite
done

echo "Step 6: Transferring Helm ownership of Roles..."
for line in $(kubectl get roles -n edbpgai-bootstrap -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate role $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite \
    -n edbpgai-bootstrap
done

echo "Step 7: Transferring Helm ownership of RoleBindings..."
for line in $(kubectl get rolebindings -n edbpgai-bootstrap -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate rolebinding $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite \
    -n edbpgai-bootstrap
done

echo "Step 8: Transferring Helm ownership of Services..."
for line in $(kubectl get svc -n edbpgai-bootstrap -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate svc $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite \
    -n edbpgai-bootstrap
done

echo "Step 9: Transferring Helm ownership of Webhook Configurations..."
kubectl annotate mutatingwebhookconfiguration \
  edb-hcp-operator-mutating-webhook-configuration \
  meta.helm.sh/release-name=edb-hcp-operator \
  --overwrite
kubectl annotate validatingwebhookconfiguration \
  edb-hcp-operator-validating-webhook-configuration \
  meta.helm.sh/release-name=edb-hcp-operator \
  --overwrite

echo "Migration completed successfully! The operator can now be installed without conflicts."
```
Step 1: Delete bootstrap deployments
Remove all deployments in the bootstrap namespace. This stops the bootstrap controllers:
```shell
# Delete all deployments in the bootstrap namespace
kubectl delete deployments -n edbpgai-bootstrap --all
___OUTPUT___
deployment.apps "edb-hcp-operator-controller-manager" deleted
deployment.apps "file-server" deleted
```

```shell
# Verify deployments are removed
kubectl get deployments -n edbpgai-bootstrap
___OUTPUT___
No resources found in edbpgai-bootstrap namespace.
```
Safe Operation
This only removes the running controllers, not the CRDs or custom resources. Your Hybrid Manager resources remain untouched.
Step 2: Transfer Helm ownership - ServiceAccount
Transfer the service account ownership annotation:
```shell
# Re-annotate the service account to change Helm ownership
kubectl annotate sa edb-hcp-operator-controller-manager \
  meta.helm.sh/release-name=edb-hcp-operator \
  --overwrite \
  -n edbpgai-bootstrap
___OUTPUT___
serviceaccount/edb-hcp-operator-controller-manager annotated
```
Step 3: Transfer Helm ownership - CRDs
Transfer ownership of all CRDs under edbpgai.edb.com:
```shell
# Update Helm release annotation for all edbpgai.edb.com CRDs
for line in $(kubectl get crd | grep edbpgai.edb.com | awk '{print $1}'); do
  kubectl annotate crd $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite
done
___OUTPUT___
customresourcedefinition.apiextensions.k8s.io/hybridcontrolplanes.edbpgai.edb.com annotated
customresourcedefinition.apiextensions.k8s.io/postflights.edbpgai.edb.com annotated
customresourcedefinition.apiextensions.k8s.io/preflights.edbpgai.edb.com annotated
```

```shell
# Verify the annotations were updated
kubectl get crd | grep edbpgai.edb.com | head -5 | while read crd rest; do
  echo "CRD: $crd"
  kubectl get crd $crd -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}'
  echo ""
done
___OUTPUT___
CRD: hybridcontrolplanes.edbpgai.edb.com
edb-hcp-operator
CRD: postflights.edbpgai.edb.com
edb-hcp-operator
CRD: preflights.edbpgai.edb.com
edb-hcp-operator
```
Step 4: Transfer Helm ownership - ClusterRoles
Transfer ownership of ClusterRoles:
```shell
# Update Helm release annotation for ClusterRoles
for line in $(kubectl get clusterroles -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate clusterrole $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite
done
___OUTPUT___
clusterrole.rbac.authorization.k8s.io/create-hcp annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-hybridcontrolplane-editor-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-hybridcontrolplane-viewer-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-manager-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-metrics-auth-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-metrics-reader annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-postflight-admin-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-postflight-editor-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-postflight-viewer-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-preflight-admin-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-preflight-editor-role annotated
clusterrole.rbac.authorization.k8s.io/edb-hcp-operator-preflight-viewer-role annotated
```
Step 5: Transfer Helm ownership - ClusterRoleBindings
Transfer ownership of ClusterRoleBindings:
```shell
# Update Helm release annotation for ClusterRoleBindings
for line in $(kubectl get clusterrolebindings -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate clusterrolebinding $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite
done
___OUTPUT___
clusterrolebinding.rbac.authorization.k8s.io/create-hcp annotated
clusterrolebinding.rbac.authorization.k8s.io/edb-hcp-operator-manager-rolebinding annotated
clusterrolebinding.rbac.authorization.k8s.io/edb-hcp-operator-metrics-auth-rolebinding annotated
```
Step 6: Transfer Helm ownership - Roles
Transfer ownership of Roles in the bootstrap namespace:
```shell
# Update Helm release annotation for Roles
for line in $(kubectl get roles -n edbpgai-bootstrap -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate role $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite \
    -n edbpgai-bootstrap
done
___OUTPUT___
role.rbac.authorization.k8s.io/edb-hcp-operator-leader-election-role annotated
```
Step 7: Transfer Helm ownership - RoleBindings
Transfer ownership of RoleBindings in the bootstrap namespace:
```shell
# Update Helm release annotation for RoleBindings
for line in $(kubectl get rolebindings -n edbpgai-bootstrap -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate rolebinding $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite \
    -n edbpgai-bootstrap
done
___OUTPUT___
rolebinding.rbac.authorization.k8s.io/edb-hcp-operator-leader-election-rolebinding annotated
```
Step 8: Transfer Helm ownership - Services
Transfer ownership of Services in the bootstrap namespace:
```shell
# Update Helm release annotation for Services
for line in $(kubectl get svc -n edbpgai-bootstrap -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edbpgai-bootstrap") | .metadata.name'); do
  kubectl annotate svc $line \
    meta.helm.sh/release-name=edb-hcp-operator \
    --overwrite \
    -n edbpgai-bootstrap
done
___OUTPUT___
service/edb-hcp-operator-controller-manager-metrics-service annotated
service/edb-hcp-operator-webhook-service annotated
```
Step 9: Transfer Helm ownership - Webhook Configurations
Transfer ownership of webhook configurations:
```shell
# Update MutatingWebhookConfiguration
kubectl annotate mutatingwebhookconfiguration \
  edb-hcp-operator-mutating-webhook-configuration \
  meta.helm.sh/release-name=edb-hcp-operator \
  --overwrite
___OUTPUT___
mutatingwebhookconfiguration.admissionregistration.k8s.io/edb-hcp-operator-mutating-webhook-configuration annotated
```

```shell
# Update ValidatingWebhookConfiguration
kubectl annotate validatingwebhookconfiguration \
  edb-hcp-operator-validating-webhook-configuration \
  meta.helm.sh/release-name=edb-hcp-operator \
  --overwrite
___OUTPUT___
validatingwebhookconfiguration.admissionregistration.k8s.io/edb-hcp-operator-validating-webhook-configuration annotated
```
Step 10: Verify ownership transfer
Verify that resources now have the correct Helm ownership:
```shell
# Check a sample of resources
echo "Checking CRD ownership:"
kubectl get crd | grep edbpgai.edb.com | head -1 | awk '{print $1}' | xargs -I {} \
  kubectl get crd {} -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}'
echo ""

echo "Checking ClusterRole ownership:"
kubectl get clusterroles -o json | \
  jq -r '.items[] | select(.metadata.annotations["meta.helm.sh/release-name"] == "edb-hcp-operator") | .metadata.name' | head -3
```
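With ownership transferred, you can install the operator Helm chart and then apply the HybridControlPlane CR generated in the conversion step. The chart reference below is a placeholder; use the exact `helm install` command from the Operator installation guide for your environment:

```shell
# Install the operator release; it now adopts the re-annotated resources
# instead of failing with an ownership conflict.
# <operator-chart-ref> is a placeholder for your chart source.
helm install edb-hcp-operator <operator-chart-ref> -n edbpgai-bootstrap

# Apply the CR produced by convert-values-to-hcp.sh.
kubectl apply -f hybridmanager.yaml
```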
Support
If you encounter issues during migration:
- Check the troubleshooting guide
- Contact EDB Support with migration logs and error messages
- Consult EDB Professional Services for complex migration scenarios
Next steps
After successful migration:
- Review the Operator installation guide for ongoing management
- Learn about updating HCP configuration
- See upgrading HM version using the operator