A Step-by-Step Guide to Deploying AlloyDB Omni with the Official Kubernetes Operator (Part 1)
Introduction
AlloyDB Omni brings the performance and intelligence of Google Cloud’s fully managed PostgreSQL-compatible database, AlloyDB, right to your own infrastructure. While a simple AlloyDB Omni Docker container is great for a quick test, deploying it with the official AlloyDB Omni Kubernetes Operator on a Kubernetes (K8s) cluster is the ideal way to set up a production-ready, cloud-native environment.
Do you have a K8s cluster? Jump right to the “Step-by-Step Deployment” section.
Don’t have a K8s cluster? No problem! You can get your hands dirty on minikube right on your local Ubuntu machine.
minikube’s purpose: “minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. We proudly focus on helping application developers and new Kubernetes users.”
This guide walks you through the prerequisites and the step-by-step process of getting AlloyDB Omni up and running on Kubernetes.
Prerequisites
Kubernetes cluster running version 1.21 or newer.
Assuming you have kubectl and helm installed on your client machine, you can jump to the “Step-by-Step Deployment” section below.
You don’t have a K8s cluster? No problem! Let us get you a toy K8s on your Ubuntu machine.
You will need a reasonably sized Ubuntu machine to run a minikube cluster locally.
Here’s how to determine Ubuntu version on your machine:
❯ lsb_release -d
Hardware Requirements
- CPU: x86-64 or Arm CPU with AVX2 support.
- RAM: Minimum 8 GB of RAM (The cluster node requires at least 8Gi).
- Disk Space: 10 GB of available disk space or more.
Software & Tools
Before starting, ensure you have the following installed on your Ubuntu system:
- Linux Kernel: 4.18 or later.
- cgroup v2: Control group v2 must be enabled.
- Docker: Required as the container runtime for minikube.
- minikube: The tool for running a local Kubernetes cluster.
- kubectl: The Kubernetes command-line tool.
- helm: The package manager for Kubernetes.
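If you want to spot-check the kernel and cgroup requirements by hand before running any scripts, a short sketch like the following works. It assumes a Linux system with coreutils; `sort -V` does the version comparison, and on a cgroup v2 (unified hierarchy) system `/sys/fs/cgroup` is mounted as `cgroup2fs`.

```shell
# Compare the running kernel against the 4.18 minimum; `sort -V` sorts
# version strings, so the minimum should come out first if the kernel is new enough.
min="4.18"
kernel="$(uname -r)"
if [ "$(printf '%s\n%s\n' "$min" "$kernel" | sort -V | head -n1)" = "$min" ]; then
  echo "kernel OK: $kernel"
else
  echo "kernel too old: $kernel"
fi

# On a cgroup v2 system this prints 'cgroup2fs'
stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo "could not read /sys/fs/cgroup (not Linux?)"
```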
Feeling overwhelmed?
You can run the commands from the following script individually, or download and run the entire script on your machine; it will tell you whether you are ready to go!
Simple, public, safe script to perform checks on prerequisites
If you prefer, you can copy the contents of the above gist into a file and give it execute permissions. In this example, I’m calling that file /tmp/alloydbomni_preflight_check.sh
❯ chmod +x /tmp/alloydbomni_preflight_check.sh
❯ /tmp/alloydbomni_preflight_check.sh
==========================================
System Prerequisite Check for AlloyDB
==========================================
--- Checking Hardware ---
[ PASS ] CPU is x86-64 and supports AVX2
[ PASS ] RAM: 236GB detected (Minimum 8GB met)
[ PASS ] Disk Space: 2799GB available (Minimum 10GB met)
--- Checking Kernel & OS ---
[ PASS ] Kernel Version: 6.16.12 (>= 4.18 met)
[ PASS ] Cgroup v2 is enabled
--- Checking Software Tools ---
[ FAIL ] Docker is NOT installed
[ FAIL ] Minikube is NOT installed
[ FAIL ] Kubectl is NOT installed
[ FAIL ] Helm is NOT installed
==========================================
Check Complete.
Install the prerequisites mentioned above
Need a helping hand?
Here’s another script
In this example, I’m calling that file /tmp/alloydbomni_prereqs.sh
❯ chmod +x /tmp/alloydbomni_prereqs.sh
❯ /tmp/alloydbomni_prereqs.sh install
Starting Installation Sequence...
.
[OK] Docker installed.
.
[OK] Minikube installed.
.
[OK] Kubectl installed.
.
[OK] Helm installed.
All tools installed successfully!
IMPORTANT: Run the following command right now to activate Docker permissions:
newgrp docker
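If you want to confirm that the group change actually took effect in your current shell, a quick check like this (standard coreutils, nothing specific to AlloyDB Omni) does the trick:

```shell
# `id -nG` prints the current session's group names, space-separated;
# if 'docker' is missing, the newgrp (or a fresh login) is still needed.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active"
else
  echo "run 'newgrp docker' or log out and back in"
fi
```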
Verify the prerequisites again. Eg:
❯ /tmp/alloydbomni_preflight_check.sh
==========================================
System Prerequisite Check for AlloyDB
==========================================
--- Checking Hardware ---
[ PASS ] CPU is x86-64 and supports AVX2
[ PASS ] RAM: 236GB detected (Minimum 8GB met)
[ PASS ] Disk Space: 2796GB available (Minimum 10GB met)
--- Checking Kernel & OS ---
[ PASS ] Kernel Version: 6.16.12 (>= 4.18 met)
[ PASS ] Cgroup v2 is enabled
--- Checking Software Tools ---
[ PASS ] Docker is installed
[ PASS ] -> Docker service is running
[ PASS ] Minikube is installed
[ PASS ] Kubectl is installed
[ PASS ] Helm is installed
==========================================
Check Complete.
Set Up and Start minikube
First, create and start your minikube cluster, making sure it meets the resource requirements for the AlloyDB Omni node (at least 2 CPUs and 8 GB of RAM). Don’t forget to run newgrp docker first.
minikube start --cpus 2 --memory 8192mb
Eg:
# Start minikube with sufficient resources
❯ minikube start --cpus 2 --memory 8192mb
😄 minikube v1.37.0 on Debian rodete (amd64)
✨ Automatically selected the docker driver. Other choices: none, ssh
📌 Using Docker driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.48 ...
💾 Downloading Kubernetes v1.34.0 preload ...
> preloaded-images-k8s-v18-v1...: 337.07 MiB / 337.07 MiB 100.00% 237.71
> gcr.io/k8s-minikube/kicbase...: 488.51 MiB / 488.52 MiB 100.00% 234.16
🔥 Creating docker container (CPUs=2, Memory=8192MB) ...
🐳 Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Step-by-Step Deployment
Step 1: Install cert-manager
The AlloyDB Omni Operator requires cert-manager to manage the TLS certificates it uses. You can do a static install using kubectl as mentioned in this page: cert-manager installation docs.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.2/cert-manager.yaml
Verify that cert-manager is running successfully.
kubectl get all -n cert-manager
Step 2: Install the AlloyDB Omni Kubernetes Operator
The operator manages the entire lifecycle of the AlloyDB Omni database cluster. As of this writing, the latest version is 1.6.0. You can install it using Helm (ignore the warnings).
export HELM_PATH=$(curl https://storage.googleapis.com/alloydb-omni-operator/latest)
export OPERATOR_VERSION="${HELM_PATH%%/*}"
curl -X GET -o "./alloydbomni-operator-${OPERATOR_VERSION}.tgz" "https://storage.googleapis.com/storage/v1/b/alloydb-omni-operator/o/$(echo ${HELM_PATH} | sed 's/\//%2F/g')?alt=media"
helm install alloydbomni-operator alloydbomni-operator-${OPERATOR_VERSION}.tgz \
  --create-namespace \
  --namespace alloydb-omni-system \
  --atomic \
  --timeout 5m
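The shell expansion and the sed call above are worth unpacking: `${HELM_PATH%%/*}` strips everything from the first `/` onward, leaving the version, and the sed percent-encodes the slash because GCS object names must be URL-encoded in the JSON API path. With an illustrative value (the real one comes from the `latest` file in the bucket):

```shell
# Illustrative value only; in the real flow this is fetched with curl
HELM_PATH="1.6.0/alloydbomni-operator-1.6.0.tgz"

# %%/* removes the longest suffix starting at the first '/', leaving the version
echo "${HELM_PATH%%/*}"                  # -> 1.6.0

# Slashes in a GCS object name must be percent-encoded in the JSON API URL
echo "${HELM_PATH}" | sed 's/\//%2F/g'   # -> 1.6.0%2Falloydbomni-operator-1.6.0.tgz
```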
More detailed instructions are here.
Verify the Operator pods are ready. Eg:
❯ kubectl get pods -n alloydb-omni-system
NAME READY STATUS RESTARTS AGE
fleet-controller-manager-69f66b5db4-fttjz 2/2 Running 0 2m4s
local-controller-manager-76f678d8d8-v2ds7 2/2 Running 0 2m4s
Step 3: Create the AlloyDB Omni Database Cluster
With the operator running, you can now define and deploy your database instance using a DBCluster custom resource.
Define Variables: Choose a name, a dedicated namespace, and a password. Also, as of this writing, the latest supported DB version is 17.5.0.
export DB_CLUSTER_NAME="my-omni-db"
export DB_CLUSTER_NAMESPACE="my-db-cluster-namespace"
# Encode your password in base64 (e.g., 'ChangeMe123' becomes 'Q2hhbmdlTWUxMjM=')
export ENCODED_PASSWORD="Q2hhbmdlTWUxMjM="
export DB_VERSION="17.5.0"
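To encode a password of your own, pipe it through base64; use printf (or echo -n) so a trailing newline doesn’t get baked into the encoded value:

```shell
# Encode (printf avoids a trailing newline, which would change the encoding)
printf 'ChangeMe123' | base64             # -> Q2hhbmdlTWUxMjM=

# Decode to double-check what the Secret will contain
printf 'Q2hhbmdlTWUxMjM=' | base64 -d     # -> ChangeMe123
```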
Create Namespace: Create the namespace for isolation.
kubectl create namespace ${DB_CLUSTER_NAMESPACE}
Create Manifest File: Create a file named db-cluster.yaml. This manifest defines the password as a Kubernetes Secret and the DBCluster resource.
cat << EOF > db-cluster.yaml
# db-cluster.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-${DB_CLUSTER_NAME}
  namespace: ${DB_CLUSTER_NAMESPACE}
type: Opaque
data:
  # The key must match the DB_CLUSTER_NAME
  ${DB_CLUSTER_NAME}: "${ENCODED_PASSWORD}"
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: ${DB_CLUSTER_NAME}
  namespace: ${DB_CLUSTER_NAMESPACE}
spec:
  # Use the latest available stable version
  databaseVersion: "${DB_VERSION}"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-${DB_CLUSTER_NAME}
    resources:
      cpu: 2
      memory: 4Gi
      disks:
      - name: DataDisk
        size: 10Gi
EOF
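Note that the here-document delimiter EOF is unquoted, which is what lets the shell substitute your exported variables into the file as it is written; a quoted delimiter would write the literal ${...} strings instead. A minimal illustration (the variable name is just an example):

```shell
NAME="demo"

# Unquoted delimiter: variables are expanded before the text is written
cat << EOF
name: ${NAME}
EOF
# -> name: demo

# Quoted delimiter: the text is written literally, with no expansion
cat << 'EOF'
name: ${NAME}
EOF
# -> name: ${NAME}
```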
Apply the Manifest:
kubectl apply -f db-cluster.yaml
Eg:
❯ kubectl apply -f db-cluster.yaml
secret/db-pw-my-omni-db unchanged
dbcluster.alloydbomni.dbadmin.goog/my-omni-db created
The operator will now provision the primary instance, which may take several minutes. You can track the status by running:
kubectl get dbcluster -n ${DB_CLUSTER_NAMESPACE} -w
Eg:
❯ kubectl get dbcluster -n ${DB_CLUSTER_NAMESPACE}
NAME PRIMARYENDPOINT PRIMARYPHASE DBCLUSTERPHASE HAREADYSTATUS HAREADYREASON
my-omni-db 10.101.237.157 Ready DBClusterReady
Step 4: Connect to the Database
Once the cluster reports DBClusterReady, you can connect to it. The easiest way is to use kubectl exec to run the psql client directly inside the AlloyDB Omni pod.
Get the Pod Name:
export DB_POD_NAME=$(kubectl get pods -n ${DB_CLUSTER_NAMESPACE} -l 'dbs.internal.dbadmin.goog/ha-role=Primary' -o jsonpath='{.items[0].metadata.name}')
Connect with psql:
kubectl exec -it ${DB_POD_NAME} -n ${DB_CLUSTER_NAMESPACE} -- psql -U postgres -h localhost
The password is ChangeMe123, the value you base64-encoded into the secret earlier.
Eg:
❯ kubectl exec -it ${DB_POD_NAME} -n ${DB_CLUSTER_NAMESPACE} -- psql -U postgres -h localhost
Defaulted container "database" out of: database, logrotate-agent, monitoring-agent, dbinit (init)
Password for user postgres:
psql (17.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_128_GCM_SHA256, compression: off)
Type "help" for help.
postgres=#
You will now be at the postgres=# prompt, ready to run SQL commands against your fully deployed AlloyDB Omni instance.
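AlloyDB Omni speaks standard PostgreSQL, so ordinary SQL works at this prompt. A quick smoke test might look like the following (the table name is just an example):

```sql
-- Confirm the server version and connection details
SELECT version();

-- Create, populate, query, and drop a throwaway table
CREATE TABLE smoke_test (id int PRIMARY KEY, note text);
INSERT INTO smoke_test VALUES (1, 'hello from AlloyDB Omni');
SELECT * FROM smoke_test;
DROP TABLE smoke_test;
```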
Cleanup
You can delete resources allocated to minikube by running
minikube delete
Eg:
❯ minikube delete
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /usr/local/google/home/hsiddulugari/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
Conclusion
Deploying AlloyDB Omni on minikube with its Kubernetes Operator is a fantastic way to explore the product’s cloud-native capabilities, and this setup gives you a powerful, PostgreSQL-compatible database to play with.
Have a beefy machine with plenty of CPU and memory, and want a hands-on introduction to managing the AlloyDB Omni lifecycle (scaling, replication, backup and restore, and more) using standard Kubernetes tools? A follow-up article is coming.
AlloyDB Omni on Kubernetes was originally published in Google Cloud – Community on Medium, where people are continuing the conversation by highlighting and responding to this story.
