Gathering your system requirements (Innovation Release)

Overview

Role focus: Infrastructure engineer / CSP Administrator / Platform Engineer

Outcomes

  • A validated high-level inventory of compute, network, and storage infrastructure to be used in Phase 3: Deploying your Kubernetes cluster

  • A set of requirements validated as ready for use in Phase 4: Preparing your environment, plus initial inputs for the HM Helm chart values.yaml. You can start this file now, continue adding to the file you began in Phase 1, or use these inputs to populate the values.yaml at the end of Phase 4 (Preparing your environment).

Note

EDB's Sales Engineering and Professional Services are the primary resources for communicating and clarifying detailed deployment requirements during the sales cycle or via a Statement of Work (SoW). The official documentation also provides comprehensive checklists.

Next phase: Phase 3: Deploying your Kubernetes cluster

Connection to architecture

The requirements listed below for Phase 2 are not all generic; many are direct consequences of the decisions made in Phase 1: Planning your architecture:

  • Locality decisions determine your latency requirements between the Control Plane and Data Plane.
  • Disaster recovery goals determine the need for specific Object Storage configurations (for replication).
  • Activeness (Active/Active) determines if you need the specialized networking required for Postgres Distributed.
Important

While this document serves as your comprehensive technical checklist, EDB Professional Services is the primary resource for validating complex deployment architectures. If your "Target state" from Phase 1 involves multi-region redundancy or high-scale requirements, we recommend clarifying these detailed requirements via a Statement of Work (SoW) before procuring infrastructure.

Deployment readiness checklist

Use this checklist to verify at a high level that you have all necessary infrastructure components available.

1. Management workstation (Bastion host)

Optionality: Strict requirement.

You require a designated machine to orchestrate the deployment. This can be a laptop (for public clusters) or a cloud-based Bastion host (for private clusters).

Note

A Bastion host (or jump host/box) is strongly recommended for secure, private access to the Kubernetes API, Portal, and PostgreSQL endpoints. This host serves as your dedicated operational workstation for installation and validation activities. For EDB Remote DBA (RDBA) or Managed Services contracts, a dedicated Bastion is mandatory.

General requirements

  • Operating System: Linux (AMD64/ARM64) or macOS.

    • Note: Windows users must use WSL2 (Windows Subsystem for Linux).
  • Network Access:

    • Kubernetes API: Must have network connectivity to the Cluster API Server (typically port 6443).
    • Internet/Registry: Must be able to reach the Container Registry to pull Helm charts and images.
  • Storage: Minimal (enough to hold configuration files and certificates).
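
Before Phase 3, you can sanity-check this connectivity from the workstation. A minimal sketch, assuming a placeholder API endpoint and the EDB registry domain referenced later in this document:

# Any HTTP response (even 401/403) proves the API server is reachable on port 6443
curl -k https://<api-server-host>:6443/version
# Confirm the container registry resolves and responds
curl -sI https://docker.enterprisedb.com | head -n 1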

Required tooling inventory

You do not need to install these tools in this phase. However, you must ensure your workstation environment allows for the installation and execution of the following binaries in the next phase (Phase 3):

Core installation tools

  • edbctl (EDB Hybrid Manager CLI)
  • kubectl (Kubernetes CLI) + cnpg plugin
  • helm (Kubernetes package manager)
  • yq (Command-line YAML processor)

Utilities

  • curl (Download utility)
  • openssl (Certificate management)
  • htpasswd (Required only if using Static Users)

Platform CLIs (Environment dependent)

  • aws (AWS CLI)
  • gcloud (Google Cloud CLI)
  • oc (OpenShift CLI)

Operational tools (Post-install)

  • pgdcli (Postgres Distributed management)
  • sentinel (Monitoring and failover management)

2. Kubernetes platform verification

Ensure the cluster you provision matches the platform selected in the Planning your architecture phase.

Supported distributions:

  • Cloud service provider platforms:

    • Microsoft Azure AKS (IR release stream only)
    • Amazon EKS
    • Google GKE
    • ROSA RedHat OpenShift Service on AWS (IR release stream only)
  • On premises platforms:

    • SUSE Rancher RKE2
    • OCP RedHat OpenShift (RHOS)

For a more detailed overview of HM compatibility per LTS/IR version, see Hybrid Manager platform compatibility.

Provisioning constraints:

  • Dedicated cluster: The cluster must be dedicated to Hybrid Manager. Multi-tenanting with other workloads is not supported.

  • Relationship: 1:1 (One HM deployment per one Cluster).

  • Lifecycle: You are responsible for the provisioning, upgrading, and scaling of the Kubernetes layer.

Configuration dependencies

Your choice of platform dictates specific values in the values.yaml file.

Record these now for the next phase:

| Parameter | YAML Key | Required Value |
|---|---|---|
| System flavor | system | eks, gke, aks, rke2, or rhos |
| OpenShift | beaconAgent.provisioning.openshift | Set to true only if using Red Hat OpenShift. |
| OpenShift ingress domain | parameters.global.default_ingress_domain | Required only for system: "rhos" (query via oc get ingresses.config/cluster -o jsonpath={.spec.domain}). |
| OpenShift console domain | parameters.upm-istio-gateway.openshift_console_domain_name | Required only for OpenShift; used for console linking and UX. |
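
As an illustration, an OpenShift (rhos) deployment might record these keys as shown in the following sketch; the domain values are hypothetical placeholders you would replace with the output of the oc query above:

system: "rhos"
beaconAgent:
  provisioning:
    openshift: true
parameters:
  global:
    default_ingress_domain: apps.cluster.example.com   # hypothetical; from oc get ingresses.config/cluster
  upm-istio-gateway:
    openshift_console_domain_name: console-openshift-console.apps.cluster.example.com   # hypothetical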

3. Compute (Node requirements)

Ensure your worker nodes meet the minimum sizing for your intended deployment topology (Minimum vs. Fully featured) as defined in the Planning Architecture phase.

Hybrid Manager components require AMD64/x86-64 nodes. Mixed-architecture clusters (ARM64 + AMD64) are not supported for HM, even though PostgreSQL itself can run on ARM64.

Node roles summary

| Node type | Purpose | Count | Kubernetes nodes | Required label |
|---|---|---|---|---|
| Control plane | Runs HM control plane & telemetry. | 3+ | Control or Worker | edbaiplatform.io/control-plane: "true" |
| Data plane | Runs Postgres databases. | 0 or 3+ | Worker | edbaiplatform.io/postgres: "true" |
| AI model (GPU) | Optional: Runs AI/ML workloads. | 0 or 2+ | Worker | nvidia.com/gpu: "true" |

Use Node Pools (e.g., AWS NodeGroups, RHOS Machine Sets) to manage resources and apply required labels/taints.
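
Node pools are the preferred mechanism for applying these labels, but for a quick manual check (or in a lab) you can set and inspect them with kubectl; the node name is a placeholder:

# Apply the control plane label to a single node, then list nodes with label columns
kubectl label node <node-name> edbaiplatform.io/control-plane="true"
kubectl get nodes -L edbaiplatform.io/control-plane,edbaiplatform.io/postgres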

In most cases, HM runs on Kubernetes worker nodes in cloud environments (since you do not have access to managed control nodes; see EKS control plane access), and can run on control nodes (see Kubernetes control plane documentation) or worker nodes for on-premises clusters.

3.1 Control plane nodes

Optionality: Strict requirement.

Resource requirements for the control plane increase with the number of databases monitored. (Note: The Telemetry stack scales with the monitoring load).

Sizing guidelines:

| Resource | Minimum (Up to 10 DBs) | Standard (10–50 DBs) | Large (>50 DBs) |
|---|---|---|---|
| CPU | 8 vCPUs | 16 vCPUs | 16+ vCPUs |
| Memory | 32 GB RAM | 64 GB RAM | 64+ GB RAM |
| Disk | 100 GB SSD | 200 GB SSD | >200 GB SSD |
| Quantity | 3 nodes | 3 nodes | 3+ nodes |

3.2 Data plane nodes

Optionality: Strict requirement.

The data plane nodes must be sized to host the PostgreSQL database clusters provisioned by users.

  • Sizing: Depends entirely on expected database workloads, storage requirements, and performance SLAs as defined in Phase 1.

3.3 AI model nodes

Optionality: Required only if utilizing GenAI/AI Factory capabilities (Installation scenario ai is enabled).

  • Sizing: Current recommendations require Nvidia B200 GPUs (or equivalent supported hardware).

Node abstractions

To ensure the HM Control Plane runs on dedicated resources and Postgres workloads are on independent resources, or to partition dedicated resources for your AI workloads, you must apply specific configurations to each of these node pools before deploying your Kubernetes cluster.

4. Local networking

Local network requirements

HM is not strictly opinionated about local networking logic. The standard networking capabilities provided by major Cloud Service Providers (AWS, Azure, GCP) are sufficient, provided they offer low latency and high bandwidth between availability zones.

  • On-premises: Networking should follow best practices regarding switching, link bonding, and link redundancy to ensure high availability.

  • Protocol: IPv4 only. Kubernetes must be configured for IPv4. IPv6 has not been thoroughly productized for Hybrid Manager; if IPv6 is required, it must be masked behind a Load Balancer so that internal communication remains IPv4.

  • Defined Network Address Spaces (CIDRs): Your cluster must be configured with distinct, non-overlapping IP ranges for:

    • Pod Network (clusterCIDR): The IP range from which all pods are assigned their IPs.
    • Service Network (serviceCIDR): The IP range from which all internal services (ClusterIPs) are assigned their virtual IPs.

  • Functional DNS and NTP: The cluster's internal DNS service (typically CoreDNS) must be running and able to resolve both internal cluster services (e.g., kubernetes.default) and external addresses.

Container Network Interface (CNI)

A functional CNI is a strict requirement for pod-to-pod networking.

  • Examples: Calico, Cilium, Flannel

  • Context: Cilium is known to be more efficient at very high pod counts. However, because Postgres is the center of Hybrid Manager, extreme pod density is rarely the bottleneck compared to database I/O. Extra effort to optimize the networking stack yields only slightly better results; depending on the customer's use case it may be justified, but more often than not good solution design can adapt to most infrastructure limitations.

  • Validation: Verify CNI setup and CIDR assignment:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
kubectl get svc

5. Ingress

You must choose an ingress strategy that matches your infrastructure capabilities. The table below defines the supported combinations of DNS Management, Endpoint Types, and SSL Termination.

| Scenario | DNS Management | Endpoint Type | Portal SSL/TLS Termination |
|---|---|---|---|
| CSP LB (ELB, ALB) | Manual portal, dynamic postgres | Dynamic via Load Balancer Controller | Istio Ingress Gateway |
| F5 + F5 K8s Controller | Manual portal, dynamic postgres | Dynamic via Load Balancer Controller | Istio Ingress Gateway |
| MetalLB | Manual | Dynamic via Load Balancer Controller | Istio Ingress Gateway |
| NodePort (Conventional LB) | Manual (A record against LB) | Static (manual) | Istio Ingress Gateway |
| NodePort (No LB) | Manual (Round Robin against Nodes) | Static (manual) | Istio Ingress Gateway |

5.1 Load balancer controller

Optionality: Required option for an optimal experience.

Examples of load balancer controllers range from the AWS Load Balancer Controller to MetalLB, or a BigIP F5 with DNS capability combined with the F5 Load Balancer Controller.

General requirements

  • TCP: The load balancer must be configured for TCP passthrough (TCP-only) and must not terminate SSL. SSL is terminated by the HM Ingress Gateway.

  • Firewall Annotations: If using a specific provider scheme (e.g., AWS Internet Facing), ensure you know the correct resourceAnnotations to provision the load balancer correctly:

AWS example annotation:

resourceAnnotations:
  - name: istio-ingressgateway
    kind: Service
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing

Configuration dependencies

| Parameter | YAML Key | Value / Format |
|---|---|---|
| Enable LB | parameters.global.load_balancer_mode | "public", "private", or "disabled" |
| Provider | parameters.global.load_balancer_provider | aws, azure, or gcp |
| Annotations | resourceAnnotations | Provider-specific list (see the AWS example above). |

Spot validation

To verify that your controller is active, use the following command:

kubectl get pods -A | grep -i 'load\|lb\|router\|metallb\|gateway'

HM control plane ports

The following are the Hybrid Manager Control Plane ports:

| Port | Protocol | Description |
|---|---|---|
| 443 | HTTPS | HM Portal (HTTPS ingress) |
| 8444 | TCP | HM internal API |
| 9443 | gRPC | Beacon gRPC API |
| 9445 | TCP | Spire TLS |

Postgres ports (Data Plane)

With a proper load balancer setup, the following are Postgres ports:

| Port | Protocol | Description |
|---|---|---|
| 5432 | TCP | Default PSQL |
| 6432 | TCP | PGD Connection Manager PSQL |
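
Once your load balancer is provisioned (in later phases), a simple reachability check from the management workstation can confirm firewall rules for these ports; the endpoint below is a hypothetical example:

# Confirm TCP connectivity to a Postgres endpoint behind the load balancer
nc -vz pg-cluster-1.pg.myorg.com 5432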

5.1.2 NodePort alternative

Optionality: Alternative to Load Balancer controller.

If a Load Balancer is not available, you must use a NodePort strategy.

  • Configuration: You must configure parameters.global.load_balancer_mode: "disabled" and define a parameters.global.node_port_domain in your HM Helm chart values.yaml configuration.

  • Context: By default, HM components expose services on specific NodePort values. You may use NodePort directly, or front these ports with MetalLB (software load balancer) or a hardware load balancer for a friendlier DNS name and better failover.

Configuration dependencies

| Parameter | YAML Key | Value / Format |
|---|---|---|
| Disable LB | parameters.global.load_balancer_mode | "disabled" |
| Domain | parameters.global.node_port_domain | Postgres DNS domain (e.g., pg.myorg.com). |
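
Expressed in values.yaml, a NodePort setup might look like the following sketch (the domain is a placeholder):

parameters:
  global:
    load_balancer_mode: "disabled"
    node_port_domain: pg.myorg.com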

Default NodePort assignments

| NodePort Variable | Port | Description |
|---|---|---|
| parameters.upm-istio-gateway.ingress_http_node_port | 32542 | HM Portal (HTTP) |
| parameters.upm-istio-gateway.ingress_https_node_port | 30288 | HM Portal (HTTPS) |
| parameters.upm-istio-gateway.ingress_grpc_tls_node_port | 30290 | Beacon gRPC |
| parameters.upm-istio-gateway.ingress_spire_tls_node_port | 30292 | Spire TLS |
| parameters.upm-istio-gateway.ingress_beacon_spire_tls_node_port | 30294 | Beacon Spire TLS |
| parameters.upm-istio-gateway.ingress_thanos_query_tls_node_port | 30296 | Thanos-query TLS (inter-cluster metrics) |
| parameters.upm-istio-gateway.ingress_fluent_bit_tls_node_port | 30298 | Fluent-bit TLS (inter-cluster logs) |
| parameters.upm-istio-gateway.enable_server_session | n/a | Enable server-stored session ("true" or "false") |

Postgres NodePorts

Postgres clusters use the node_port_domain as their DNS base, and each cluster is assigned an incrementing port in the 30000+ range.

5.2 DNS

Optionality: DNS configuration is a required option for an optimal experience.

You must ensure proper DNS resolution both inside the cluster and for external access.

Internal cluster DNS & NTP

  • CoreDNS: The cluster's internal DNS service (typically CoreDNS) must be running and able to resolve both internal cluster services (e.g., kubernetes.default) and external internet addresses.

  • NTP: Functional NTP is required to ensure time synchronization across nodes.

  • Container Registry: Your DNS service must be able to resolve the DNS entries for the container registry (whether local or EDB's).

Configuration dependencies

You must provision and control the following domains. These are inputs into your configuration in the HM Helm chart values.yaml in the next phase:

| Domain Purpose | YAML key | Description |
|---|---|---|
| Portal domain | parameters.global.portal_domain_name | The host name for the HM Portal UI. |
| Agent domain | parameters.upm-beacon.server_host | The host name through which the Beacon Server API is reachable. |
| Migration domain | parameters.global.dms_domain_name | The internal domain for the Transporter migration service. Required only if the migration installation scenario is enabled. |
| NodePort base | parameters.global.node_port_domain | (Scenario B only) The base domain for Postgres access if using NodePort. |

If using the NodePort approach, the above DNS entries should be round-robin records pointing to the compute IPs of the Kubernetes nodes that carry the control-plane label.

DNS configuration strategy

Your DNS configuration depends on your Ingress selection.

Scenario A: Load balancer (recommended)

  • Dependency: Ensure parameters.global.load_balancer_mode is set to "public" or "private" in the values.yaml.

  • Portal: In the case of an optimal load balancer controller like AWS ELB or an F5 with DNS capability, manually configure DNS to point to the resulting Load Balancer IP for istio-ingress. Domain names must be resolvable to a locally routable IP so that the ingress service on Kubernetes can properly route traffic according to the hostname in the HTTPS request.

  • Postgres: You do not need to manually manage DNS for every database.

Postgres DNS records are populated automatically as a function of the load balancer controller updating the status of the Service in the namespace dedicated to that Postgres cluster.

Scenario B: NodePort alternative

  • Prerequisite: You disabled load balancers and defined a node_port_domain in the Ingress section.

  • Portal: DNS entries should be a Round Robin A Record pointing to the compute IPs of the Control Plane nodes (nodes labeled edbaiplatform.io/control-plane).

  • Postgres: The node_port_domain value is used as the base URL for all Postgres instances. It should be a Round Robin DNS record pointing to the IP addresses of the Worker nodes where the Postgres clusters are running.
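
For either scenario, you can verify resolution from the management workstation; the domain below is a hypothetical example:

# Scenario A: should return the Load Balancer address
# Scenario B: should return the node IPs in round-robin rotation
dig +short portal.hm.example.com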

5.3 Certificate management

Optionality: Strictly required.

Secure communication requires TLS certificates for the Portal and API endpoints. You must decide on a certificate management strategy before installation.

Decision matrix

| Option | Description | Typical use case |
|---|---|---|
| A. Custom cert-manager issuer | Use an existing cert-manager Issuer. | Production (Recommended). |
| B. Customer CA | Bring your own CA to sign internal certs. | Enterprises with internal PKI. |
| C. Customer Certificate | Provide your own pre-generated x.509 cert and private key as a Kubernetes secret. | Specific organization-issued cert. |
| D. Self-signed | Installer generates self-signed certs. | Test/Non-Production only. |

Configuration dependencies by strategy

| Strategy | YAML key | Description |
|---|---|---|
| A. Cert Manager | parameters.global.portal_certificate_issuer_kind | Required. The Resource Kind (Issuer or ClusterIssuer). |
| A. Cert Manager | parameters.global.portal_certificate_issuer_name | Required. The Resource Name (e.g., letsencrypt-prod). |
| B. BYO CA | parameters.global.ca_secret_name | Required. The name of your CA Secret. |
| C. BYO Cert | parameters.global.portal_certificate_secret | Required. The name of your Certificate Secret. |
| D. Self-signed | N/A | Default behavior. (No configuration required.) |
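
For example, Strategy A with a hypothetical cert-manager ClusterIssuer named letsencrypt-prod translates into these values.yaml keys:

parameters:
  global:
    portal_certificate_issuer_kind: ClusterIssuer
    portal_certificate_issuer_name: letsencrypt-prod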

6. Block storage (Database & logs)

Optionality: Strictly required.

You must define a Kubernetes StorageClass that can provision persistent volumes for persistent volume claims (PVCs).

The configuration in values.yaml defines the default class used for the HM Control Plane; the same class is also available to all Postgres deployments. For the Data Plane (your Postgres deployments), you may additionally establish any number of extra storage classes in the cluster. All of them become available options for Postgres deployments, providing the flexibility needed for critical or intensive workloads.

Control plane requirements

Block storage demands for the HM Control Plane are relatively light. A storage class roughly equivalent to the capability of AWS gp2 is sufficient.

Data plane requirements (Postgres)

Postgres workloads may require specialized configurations depending on their specific I/O patterns.

  • Latency Sensitivity: Postgres is generally more sensitive to latency—and specifically the consistency of latency—than raw throughput.

  • Example: Analytics: Analytic capabilities perform as expected with approximately 16,000 IOPS capacity per PVC.

  • Example: Azure: In Azure, IOPS-capable storage (Premium SSD) is highly preferred—even when provisioning low IOPS capacity—due to its highly consistent latency compared to Standard tiers.

Notable considerations

Hybrid Manager and containerized Postgres are particularly well suited to leverage Local Storage. Unlike traditional applications, these solutions do not strictly depend on the PVC being replicated across the Kubernetes cluster by the storage provider. If a Postgres instance loses its underlying node (and local disk), the new pod does not need to remount the old PVC. Instead, the Hybrid Manager High Availability (HA) logic rebuilds the new node from its peers or backup.

Configuration dependencies

| Parameter | YAML Key | Value / Format |
|---|---|---|
| Storage Class | parameters.global.storage_class | The exact name of your K8s StorageClass. |

Validation

Verify available storage classes:

kubectl get sc

6.1 Volume snapshots

  • Optionality: Required for an optimal experience (backups will be limited without it).

The underlying storage provider must support Volume Snapshots to enable the backup and recovery features of the platform.

  • Mechanism: Hybrid Manager automatically utilizes volume snapshots if the capability is exposed via the Kubernetes API.

  • Implementation note:

    • Cloud: Typically handled by the CSP's CSI driver (e.g., AWS EBS CSI), but often requires installing a "Snapshot Controller" add-on.
    • On-Premises: Your storage backend must support the CSI Snapshotter specification, and you must define a VolumeSnapshotClass that maps to it.
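
For on-premises clusters, a minimal VolumeSnapshotClass sketch looks like the following; the driver name here is an assumption and must match your storage backend's CSI driver:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: default-snapclass
driver: ebs.csi.aws.com        # assumption: replace with your CSI driver name
deletionPolicy: Delete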
Validation

Verify that a snapshot class is configured and available:

kubectl get volumesnapshotclass

7. Object storage

Optionality: Strictly required.

Object storage is the backbone of disaster recovery and data mobility. It is used for:

  • Velero backups (Cluster DR)
  • Postgres WAL archiving (Point-in-Time Recovery)
  • Unstructured data for GenAI/AIDB
  • Parquet files for Lake Keeper
  • Log files

General requirements

  • Protocol: Must be fully S3 Compatible.

  • Consistency: In multi-location deployments, the object storage configuration must be available equally across all locations. (See multi-dc guidance for details.)

Note

You must provision a Kubernetes Secret named edb-object-storage in the default namespace. The contents of this secret must be identical in every cluster. You are not expected to run kubectl create secret at this point in the process. Here, you are gathering the raw credentials (API keys, certificates, passwords) so you have them ready to inject into the cluster in Phase 4: Preparing your environment.

Important

Your object storage must be dedicated entirely to the Hybrid Manager. Any prior data existing in your object storage interferes with the successful operation of the HM. Similarly, if HM is removed and reinstalled, then the object storage must be emptied prior to beginning the new installation.

Authentication strategies

Choose the authentication method supported by your provider:

| Provider | Auth Method | Example Use Case |
|---|---|---|
| AWS (EKS/ROSA) | Workload Identity (IAM) | Native AWS integration (Recommended). |
| AWS (Other K8s) | Static Keys | Generic Kubernetes connecting to AWS S3. |
| Azure Blob | Static Keys | Connect via Connection String/Key. |
| GCP Storage | Static Keys | Base64-encoded Service Account JSON. |
| S3 Compatible | Static Keys | On-Premises (MinIO, Ceph, etc.). |
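
As a preview of Phase 4, a static-keys configuration ultimately becomes a Kubernetes Secret along the lines of the sketch below. The secret name and namespace come from the note above; the key names inside the secret are placeholders, so consult Phase 4 for the exact schema:

# Illustrative only — created in Phase 4, not now; key names are placeholders
kubectl create secret generic edb-object-storage \
  --namespace default \
  --from-literal=access-key-id=<access-key> \
  --from-literal=secret-access-key=<secret-key>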

Configuration dependencies

These are the keys in the values.yaml (created in Phase 3) that are affected by your object storage choices.

8. Container registry

Optionality: A local container registry is a required option for an optimal experience.

You require a container registry that is accessible from your Kubernetes Cluster to host HM and Postgres images.

  • Purpose: To serve the specific application images required for the Control Plane and Data Plane.

  • Access Requirement: The cluster must be able to pull images from this registry.

  • Permissions: The authentication credential used must have permissions to List Repositories, List Tags, and Read Tag Manifests.

If your environment cannot access the public EDB repository, you must pull the artifacts and push them to your local private registry using edbctl.

Supported providers & authentication

| Registry provider | Supported Auth | Recommended Auth |
|---|---|---|
| Azure (ACR) | Token, Basic | Token |
| Amazon (ECR) | EKS Managed Identity | EKS Managed Identity |
| Google (GAR) | Token, Basic | Basic |
| EDB Repo | Token | Token (PoC/Pilot only!) |

Example registry domains: Azure ACR myregistry.azurecr.io; AWS ECR 123456079902.dkr.ecr.us-east-1.amazonaws.com; GCP GAR us-east1-docker.pkg.dev; EDB CloudSmith Repos 2.0 docker.enterprisedb.com/pgai-platform.
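
To confirm a credential satisfies the List Repositories permission, you can exercise the standard Docker Registry v2 API; the registry domain and credential are placeholders:

# A 200 response listing repositories confirms list permission
curl -u <user>:<token> https://<registry-domain>/v2/_catalog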

Preparation

  • Sync images to your local container registry: You must use edbctl to synchronize images if using a private registry.

  • Secret: You must provision a Kubernetes Secret with pull credentials. (See Phase 4: Preparing your environment for implementation details.)

Note

Configure image discovery: Once images are synced to your local registry, you may need to configure the installer to locate them. This ensures the cluster pulls artifacts from the correct location (local registry vs. public registry).

Configuration file

You need to collect these specific values to populate your values.yaml in the next phase (Phase 3).

| Parameter | YAML Key | Value / Format |
|---|---|---|
| Bootstrap Image | bootstrapImageName | <Registry Domain>/pgai-platform/edbpgai-bootstrap/bootstrap-<K8s_Flavor> |
| Bootstrap Tag | bootstrapImageTag | HM/Platform version tag (for example, "v2026.4.0"). |
| Global Registry | containerRegistryURL | <Registry Domain>/pgai-platform |
| Pull Secret | imagePullSecrets | Array of Kubernetes image pull secret names (for example, ["edb-cred"]). |
| Discovery Registry | beaconAgent.provisioning.imagesetDiscoveryContainerRegistryURL | <Registry Domain>/pgai-platform |
| Discovery Auth | beaconAgent.provisioning.imagesetDiscoveryAuthenticationType | <Auth Type> (e.g., token, basic, eks_managed_identity) |
| TLS Validation | beaconAgent.provisioning.imagesetDiscoveryAllowInsecureRegistry | Set to true only if intentionally allowing insecure (no/invalid TLS) registry discovery. |

9. Identity provider (IdP)

Hybrid Manager is not intended to provide user authentication directly. It relies on an external Identity Provider (IdP) to manage user access securely.

Optionality: Required for optimal experience and security.

  • Mechanism: Connects to your existing directory service via OIDC.
  • Supported Backends: LDAP or SAML.

You must prepare your IdP connection values and artifacts for Phase 4 (Preparing your environment).

10. Key management service (KMS)

Integrating an external Key Management Service (KMS) is required to enable Transparent Data Encryption (TDE) for your Postgres databases.

  • Optionality: Required for optimal security and compliance.

  • Mechanism: Hybrid Manager integrates with cloud providers or external vaults to manage encryption keys securely.

Supported authentication

  • Workload Identity: Native cloud authentication (Recommended for EKS/GKE/AKS).

  • Credentials: Static keys stored in Kubernetes secrets.

Configuration dependencies

You must prepare the following for the values.yaml for Phase 4:

| Parameter | YAML Key | Value / Format |
|---|---|---|
| TDE Providers | beaconAgent.transparentDataEncryptionMethods | List of enabled methods (e.g., passphrase, aws_kms, azure_kms, gcp_kms, hashicorp_vault). |

  • Auth Secrets: If using credentials mode, specific Kubernetes Secrets must be provisioned with API keys.
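
For instance, enabling both a passphrase fallback and AWS KMS would look like this in values.yaml (method names taken from the table above):

beaconAgent:
  transparentDataEncryptionMethods:
    - passphrase
    - aws_kms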

Example values.yaml

Your values.yaml, if you have already started it, should reflect the decisions made in the previous steps.

Here is an example of a production-oriented values.yaml for an installation using all of the options "required for an optimal experience". Whether you are starting the file now or have already begun it, your configuration may differ if you are not employing all of those options.

system: <Kubernetes>
bootstrapImageName: <Container Registry Domain>/pgai-platform/edbpgai-bootstrap/bootstrap-<Kubernetes>
bootstrapImageTag: <image-tag-version>
containerRegistryURL: "<Container Registry Domain>/pgai-platform"
scenarios: "core,migration,ai,analytics"
parameters:
  global:
    portal_domain_name: <Portal Domain>
    storage_class: <Block Storage>
    portal_certificate_issuer_kind: <ClusterIssuer>
    portal_certificate_issuer_name: <my-issuer>
    dms_domain_name: <Migration Domain>
    load_balancer_mode: "public"
    load_balancer_provider: <cloud_provider>
  upm-beacon:
    beacon_location_id: "<Location>"
    server_host: <Agent Domain>
beaconAgent:
  provisioning:
    imagesetDiscoveryAuthenticationType: <Authentication Type for the Container Registry>
    imagesetDiscoveryContainerRegistryURL: "<Container Registry Domain>/pgai-platform"
    imagesetDiscoveryAllowInsecureRegistry: false
  transparentDataEncryptionMethods:
    - <available_encryption_method>
resourceAnnotations:
  - name: istio-ingressgateway
    kind: Service
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/8  
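
Before carrying this file into Phase 3, a quick sanity check with yq (one of the required tools from the inventory above) can catch quoting mistakes; the path below is one of the keys defined in the example:

# Should print the portal domain value, not an error
yq '.parameters.global.portal_domain_name' values.yaml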

Next phase

You now have a complete inventory and deployment strategy, as well as a set of inputs for your HM Helm chart values.yaml. Add them now if you are building your file as you go, or use them to build the file at the end of Phase 4: Preparing your environment.

Proceed to Phase 3: Deploying your Kubernetes cluster → to provision and deploy your Kubernetes cluster.