Kubernetes Deployment¶
This guide covers deploying ubTrace on Kubernetes using the official Helm chart. The chart supports vanilla Kubernetes, AWS EKS, and OpenShift 4+.
For Docker Compose deployments (including air-gapped / offline), see Installation.
Prerequisites¶
Kubernetes 1.25+ cluster (1.28+ recommended)
Helm 3.12+
kubectl configured for your cluster
A StorageClass that supports ReadWriteOnce PVCs
(Optional) An Ingress controller (NGINX, ALB, etc.)
Note
The chart bundles PostgreSQL, Elasticsearch, Redis, and Keycloak as
sub-deployments. For production, consider using managed services
(RDS, OpenSearch, ElastiCache, etc.) and pointing the chart at them
via the external.* values.
Quick Start¶
# Clone the repository (or obtain the chart archive)
git clone https://github.com/useblocks/ubtrace.git
cd ubtrace
# Install with default values
helm install ubtrace ./deploy/helm/ubtrace -n ubtrace --create-namespace
# Watch pods come up
kubectl get pods -n ubtrace -w
The default values.yaml starts all infrastructure locally in the cluster
with sensible defaults. All services should be ready within 2-5 minutes,
depending on image pull times.
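To block until the release is fully up, for example in a CI job, you can wait on pod readiness; this sketch assumes the release name and namespace used above:

```shell
# Wait up to 5 minutes for every pod in the namespace to report Ready
kubectl wait --for=condition=Ready pods --all -n ubtrace --timeout=300s

# Confirm the release status once the pods settle
helm status ubtrace -n ubtrace
```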
Platform-Specific Overlays¶
The chart ships with overlay files for common platforms. Use the -f flag
to layer them on top of the defaults.
Minikube (Local Development)¶
minikube start --cpus=4 --memory=8192
minikube addons enable ingress
helm install ubtrace ./deploy/helm/ubtrace \
-f deploy/helm/ubtrace/values-minikube.yaml \
-n ubtrace --create-namespace
The minikube overlay uses reduced resource requests and the standard
StorageClass. Ingress is enabled with the NGINX controller.
Add the following entry to /etc/hosts (substituting the output of minikube ip) to access services via Ingress:
$(minikube ip) ubtrace.local ubtrace-api.local keycloak.local
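The hosts entry above can be appended in one step; this is a sketch that assumes minikube is running and that you can sudo:

```shell
# Resolve the minikube VM IP and map the Ingress hostnames to it
echo "$(minikube ip) ubtrace.local ubtrace-api.local keycloak.local" \
  | sudo tee -a /etc/hosts
```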
Alternatively, use port-forwarding (no /etc/hosts changes needed):
# Frontend (service port 3000 → local 7155)
kubectl port-forward svc/ubtrace-frontend 7155:3000 -n ubtrace
# API (service port 3000 → local 7150)
kubectl port-forward svc/ubtrace-api 7150:3000 -n ubtrace
# Keycloak (service port 8080 → local 7181)
kubectl port-forward svc/ubtrace-keycloak 7181:8080 -n ubtrace
Important
When using port-forwarding, the OIDC URLs must point to localhost
instead of Ingress hostnames. Override during install:
helm install ubtrace ./deploy/helm/ubtrace \
-f deploy/helm/ubtrace/values-minikube.yaml \
--set oidc.issuer=http://localhost:7181/realms/ubtrace \
--set keycloak.hostname=http://localhost:7181 \
--set api.publicUrl=http://localhost:7150 \
--set api.config.frontendUbtUrl=http://localhost:7155 \
--set frontend.config.apiUrl=http://localhost:7150/api \
--set frontend.config.oidcIssuer=http://localhost:7181/realms/ubtrace \
--set frontend.config.authRedirectUrl=http://localhost:7155/auth/callback \
-n ubtrace --create-namespace
AWS EKS¶
helm install ubtrace ./deploy/helm/ubtrace \
-f deploy/helm/ubtrace/values-eks.yaml \
--set api.ingress.annotations."alb\.ingress\.kubernetes\.io/certificate-arn"=arn:aws:acm:REGION:ACCOUNT:certificate/CERT-ID \
--set api.ingress.hosts[0].host=ubtrace-api.example.com \
--set frontend.ingress.hosts[0].host=ubtrace.example.com \
--set keycloak.ingress.hosts[0].host=keycloak.example.com \
-n ubtrace --create-namespace
The EKS overlay pre-configures:
StorageClass: gp3 (EBS gp3 volumes)
Ingress: AWS ALB Ingress Controller with HTTPS termination
Annotations: ALB scheme, target type, health check paths
Before deploying, ensure:
The AWS Load Balancer Controller is installed
An ACM certificate exists for your domain(s)
DNS records point to the ALB (created automatically by the controller)
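A quick pre-flight check for the first two requirements might look like this; the deployment and namespace names reflect the controller's default install, so adjust them if yours differ:

```shell
# Verify the AWS Load Balancer Controller is installed and available
kubectl get deployment aws-load-balancer-controller -n kube-system

# List ACM certificates to confirm one exists for your domain
aws acm list-certificates --region REGION
```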
Tip
For production EKS deployments, consider using managed services:
# In your custom values file
postgresql:
enabled: false
external:
host: "mydb.cluster-xxxxx.us-east-1.rds.amazonaws.com"
port: 5432
database: "ubtrace"
elasticsearch:
enabled: false
external:
url: "https://search-ubtrace-xxxxx.us-east-1.es.amazonaws.com"
redis:
enabled: false
external:
host: "ubtrace.xxxxx.ng.0001.use1.cache.amazonaws.com"
port: 6379
OpenShift 4+¶
helm install ubtrace ./deploy/helm/ubtrace \
-f deploy/helm/ubtrace/values-openshift.yaml \
-n ubtrace --create-namespace
The OpenShift overlay:
Sets securityContext.runAsUser: null so OpenShift assigns UIDs from the namespace range (required by SCCs)
Disables Ingress objects (OpenShift uses Routes)
All ubTrace images are built with GID 0 (root group) permissions on their data
directories (chmod g=u), so they work without additional SCCs under the
default OpenShift restricted policy. The entrypoints automatically detect
OpenShift mode (non-root UID + GID 0) and skip operations that require root.
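You can verify the UID/GID arrangement from a running pod; the Deployment name below assumes the default release name:

```shell
# Expect a high namespace-assigned UID and gid=0(root)
oc exec -n ubtrace deploy/ubtrace-api -- id
```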
After deployment, expose services with OpenShift Routes:
oc expose svc/ubtrace-frontend -n ubtrace
oc expose svc/ubtrace-api -n ubtrace
oc expose svc/ubtrace-keycloak -n ubtrace
# For TLS-terminated Routes:
oc create route edge ubtrace-frontend \
--service=ubtrace-frontend \
--hostname=ubtrace.example.com \
-n ubtrace
Configuration¶
All configuration is done via values.yaml overrides. The chart follows
a consistent pattern for each component.
Image Configuration¶
global:
imageRegistry: "ghcr.io/useblocks" # Registry for ubTrace images
imagePullSecrets: [] # Pull secrets for private registries
imagePullPolicy: "IfNotPresent" # IfNotPresent, Always, or Never
# Per-component overrides (optional)
api:
image:
registry: "" # Falls back to global.imageRegistry
repository: ub-backend
tag: "" # Falls back to Chart.AppVersion
Infrastructure Toggles¶
Each infrastructure component can be disabled and replaced with an external service:
# Use external PostgreSQL (e.g., AWS RDS)
postgresql:
enabled: false
external:
host: "mydb.example.com"
port: 5432
database: "ubtrace"
# Use external Elasticsearch (e.g., AWS OpenSearch)
elasticsearch:
enabled: false
external:
url: "https://search.example.com"
# Use external Redis (e.g., AWS ElastiCache)
redis:
enabled: false
external:
host: "redis.example.com"
port: 6379
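Before pointing the chart at external services, it can help to smoke-test connectivity from inside the cluster. A sketch using throwaway pods, with the placeholder hostnames from the examples above:

```shell
# PostgreSQL reachability check
kubectl run db-check --rm -it --restart=Never --image=postgres:16 -n ubtrace \
  -- pg_isready -h mydb.example.com -p 5432

# Redis reachability check
kubectl run redis-check --rm -it --restart=Never --image=redis:7 -n ubtrace \
  -- redis-cli -h redis.example.com -p 6379 ping
```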
Passwords & Secrets¶
The chart generates a Kubernetes Secret with default passwords from
values.yaml. For production, either:
Option A: Override values at install time
helm install ubtrace ./deploy/helm/ubtrace \
--set postgresql.auth.password=<strong-password> \
--set postgresqlKeycloak.auth.password=<strong-password> \
--set keycloak.adminPassword=<strong-password> \
--set redis.auth.password=<strong-password> \
-n ubtrace --create-namespace
Option B: Use an existing Secret
# Create the secret manually
kubectl create secret generic ubtrace-secrets -n ubtrace \
--from-literal=postgresql-password=<pw> \
--from-literal=postgresql-keycloak-password=<pw> \
--from-literal=keycloak-admin-password=<pw> \
--from-literal=redis-password=<pw> \
--from-literal=oidc-client-secret=<secret>
# Reference it during install
helm install ubtrace ./deploy/helm/ubtrace \
--set api.existingSecret=ubtrace-secrets \
-n ubtrace --create-namespace
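To confirm the Secret carries all the keys the chart expects, you can list its key names; this assumes the Secret created in Option B above:

```shell
# Print the key names stored in the Secret (values stay base64-encoded)
kubectl get secret ubtrace-secrets -n ubtrace \
  -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'
```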
Important
OIDC Client Secret: The bundled Keycloak realm ships with a
pre-configured client secret (FnZuseq02pwaNKqsvDxL3jq4HhzPey2b).
Do not change the oidc.clientSecret value unless you also regenerate
the secret in Keycloak. See Keycloak / OIDC Hardening.
OIDC / Authentication¶
Configure the OIDC URLs to match your deployment’s public hostnames:
oidc:
issuer: "https://keycloak.example.com/realms/ubtrace"
clientId: "nestjs-app"
clientSecret: "FnZuseq02pwaNKqsvDxL3jq4HhzPey2b"
keycloak:
hostname: "https://keycloak.example.com"
api:
publicUrl: "https://ubtrace-api.example.com"
config:
frontendUbtUrl: "https://ubtrace.example.com"
frontend:
config:
apiUrl: "https://ubtrace-api.example.com/api"
oidcIssuer: "https://keycloak.example.com/realms/ubtrace"
authRedirectUrl: "https://ubtrace.example.com/auth/callback"
Persistence¶
The chart creates PVCs for shared pipeline volumes. Customize sizes and StorageClass per volume:
persistence:
output:
size: 20Gi
storageClass: "" # Uses global.storageClass if empty
accessMode: ReadWriteOnce
existingClaim: "" # Use a pre-existing PVC
inputSrcBuild:
size: 10Gi
inputBuild:
size: 10Gi
inputSrc:
size: 10Gi
keycloakThemeJar:
size: 128Mi
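If you prefer to manage a volume outside the chart, a pre-created PVC referenced via existingClaim might look like the following; the name, StorageClass, and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubtrace-output-custom
  namespace: ubtrace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3
  resources:
    requests:
      storage: 50Gi
```

Apply it with kubectl apply -f, then set persistence.output.existingClaim: ubtrace-output-custom in your values file.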
Resource Requests & Limits¶
Each component has configurable resource requests and limits:
| Component | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|
| API | 250m | 1 | 512Mi | 1Gi |
| Frontend | 100m | 500m | 256Mi | 512Mi |
| Worker | 500m | 2 | 1Gi | 4Gi |
| Builder | 250m | 1 | 512Mi | 2Gi |
| Keycloak | 250m | 1 | 512Mi | 1Gi |
| PostgreSQL | 250m | 1 | 256Mi | 1Gi |
| Elasticsearch | 500m | 2 | 1Gi | 2Gi |
| Redis | 100m | 500m | 128Mi | 512Mi |
Override in your values file:
api:
resources:
requests:
cpu: 500m
memory: 1Gi
limits:
cpu: "2"
memory: 2Gi
Importing Artifacts¶
ubTrace uses a worker pipeline that automatically detects new or changed artifacts. The pipeline reads from Persistent Volume Claims (PVCs). Since the worker and builder mount these PVCs as read-only, you need a temporary helper pod to write data into them.
Note
This is a key difference from Docker Compose deployments, where you can copy files directly into host-mounted directories. On Kubernetes, PVC access requires a pod with the volume mounted.
Option A: Pre-Built CI Output (Recommended)¶
If your CI pipeline already builds with ubt_sphinx, copy the output into
the input-build PVC.
Step 1: Start a temporary import pod
kubectl run ubtrace-import --image=busybox --restart=Never \
--overrides='{
"spec": {
"containers": [{
"name": "import",
"image": "busybox",
"command": ["sleep", "3600"],
"volumeMounts": [{
"name": "input-build",
"mountPath": "/data/input_build"
}]
}],
"volumes": [{
"name": "input-build",
"persistentVolumeClaim": {
"claimName": "ubtrace-input-build"
}
}]
}
}' -n ubtrace
# Wait for the pod to be ready
kubectl wait --for=condition=Ready pod/ubtrace-import -n ubtrace --timeout=30s
Step 2: Create the directory structure and copy artifacts
The required path inside the PVC is {org}/{project}/{version}/. These
values must match the fields in config/ubtrace_project.toml.
# Create directories
kubectl exec ubtrace-import -n ubtrace -- \
mkdir -p /data/input_build/mycompany/my-project/v1
# Copy pre-built artifacts
kubectl cp ./ci-output/mycompany/my-project/v1 \
ubtrace/ubtrace-import:/data/input_build/mycompany/my-project/v1 -n ubtrace
Step 3: Verify and clean up
# Verify the structure
kubectl exec ubtrace-import -n ubtrace -- \
find /data/input_build -maxdepth 4 -type d
# Delete the temporary pod
kubectl delete pod ubtrace-import -n ubtrace
The worker polls input_build/ every 30 seconds and processes new versions
automatically. No restart is needed.
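You can watch the worker pick up the new version in its logs; the label selector assumes the chart's component labels, following the pattern used elsewhere in this guide:

```shell
# Follow worker logs while it detects and imports the new version
kubectl logs -l app.kubernetes.io/component=worker -n ubtrace -f --tail=50
```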
Tip
If you mounted a custom PVC via existingClaim, replace
ubtrace-input-build in the pod spec with your claim name.
Option B: Sphinx Source Files¶
To have ubTrace build your Sphinx projects, copy source files into the
input-src PVC using the same temporary pod pattern:
# Start a temporary import pod with the input-src PVC
kubectl run ubtrace-import-src --image=busybox --restart=Never \
--overrides='{
"spec": {
"containers": [{
"name": "import",
"image": "busybox",
"command": ["sleep", "3600"],
"volumeMounts": [{
"name": "input-src",
"mountPath": "/data/input_src"
}]
}],
"volumes": [{
"name": "input-src",
"persistentVolumeClaim": {
"claimName": "ubtrace-input-src"
}
}]
}
}' -n ubtrace
kubectl wait --for=condition=Ready pod/ubtrace-import-src -n ubtrace --timeout=30s
# Create directory and copy Sphinx source
kubectl exec ubtrace-import-src -n ubtrace -- \
mkdir -p /data/input_src/mycompany/my-project/v1
kubectl cp ./my-sphinx-project/ \
ubtrace/ubtrace-import-src:/data/input_src/mycompany/my-project/v1 -n ubtrace
# Clean up
kubectl delete pod ubtrace-import-src -n ubtrace
The builder polls input_src/ every 60 seconds, builds with Sphinx, and
passes output to the worker.
See Installation for details on directory structure, supported Sphinx
layouts, and the required ubtrace_project.toml format.
First Login¶
The bundled Keycloak realm includes a pre-created test user:
| Username | Password |
|---|---|
|  |  |
Open the ubTrace frontend and log in with these credentials. Before going to production, delete this user and rotate the OIDC client secret in Keycloak (see Keycloak / OIDC Hardening).
Day-to-Day Operations¶
Upgrading¶
# Update chart values (e.g., new image tag)
helm upgrade ubtrace ./deploy/helm/ubtrace \
-f my-values.yaml \
-n ubtrace
# Check rollout status
kubectl rollout status deployment/ubtrace-api -n ubtrace
Rolling Back¶
# List releases
helm history ubtrace -n ubtrace
# Rollback to a previous revision
helm rollback ubtrace <REVISION> -n ubtrace
Checking Status¶
# All pods
kubectl get pods -n ubtrace
# Service endpoints
kubectl get svc -n ubtrace
# PVC usage
kubectl get pvc -n ubtrace
# Logs for a specific component
kubectl logs -l app.kubernetes.io/component=api -n ubtrace --tail=100
Scaling¶
The API and frontend support horizontal scaling:
# Scale API replicas
helm upgrade ubtrace ./deploy/helm/ubtrace \
--set api.replicaCount=3 \
-n ubtrace
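If the metrics-server is installed, you can also let Kubernetes scale the API on CPU load instead of fixing a replica count. A sketch; the thresholds are illustrative and the Deployment name assumes the default release name:

```shell
# Create an HPA targeting 70% average CPU across 2-5 API replicas
kubectl autoscale deployment/ubtrace-api -n ubtrace \
  --min=2 --max=5 --cpu-percent=70
```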
Note
The worker and builder are single-instance by design (they process shared PVC data). Do not scale them beyond 1 replica.
Uninstalling¶
helm uninstall ubtrace -n ubtrace
Warning
This removes all Kubernetes resources but preserves PVCs by default. To delete data volumes:
kubectl delete pvc -l app.kubernetes.io/instance=ubtrace -n ubtrace
Troubleshooting¶
Pods stuck in Pending¶
Usually caused by insufficient resources or missing StorageClass:
kubectl describe pod <pod-name> -n ubtrace
# Check if the StorageClass exists
kubectl get sc
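Scheduling failures also surface as namespace events, which are often quicker to scan than individual pod descriptions:

```shell
# Show recent scheduling failures in the namespace, newest last
kubectl get events -n ubtrace \
  --field-selector reason=FailedScheduling \
  --sort-by=.lastTimestamp
```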
Keycloak not ready¶
Keycloak requires its database to be running first and takes 1-3 minutes for the initial startup (realm import). Check the startup probe:
kubectl logs -l app.kubernetes.io/component=keycloak -n ubtrace --tail=50
# The startup probe allows up to 5 minutes (60 attempts x 5s)
kubectl describe pod -l app.kubernetes.io/component=keycloak -n ubtrace
OIDC redirect errors¶
The most common cause is mismatched URLs. Ensure oidc.issuer,
keycloak.hostname, api.publicUrl, and frontend.config.* all
use the correct public hostnames for your deployment.
For port-forwarding setups, all URLs must use localhost with the
forwarded ports (see the minikube section above).
Database authentication failures¶
If PostgreSQL pods fail with FATAL: password authentication failed,
this is caused by stale PVCs from a previous installation. PostgreSQL only
sets passwords on first initialization.
# Delete the PVCs and reinstall
helm uninstall ubtrace -n ubtrace
kubectl delete pvc -l app.kubernetes.io/instance=ubtrace -n ubtrace
helm install ubtrace ./deploy/helm/ubtrace -n ubtrace
Private Registry¶
To pull images from a private registry:
# Create a pull secret
kubectl create secret docker-registry ubtrace-pull-secret \
--docker-server=harbor.corp.example \
--docker-username=robot\$ubtrace \
--docker-password=<token> \
-n ubtrace
# Install with private registry settings
helm install ubtrace ./deploy/helm/ubtrace \
--set global.imageRegistry=harbor.corp.example/ubtrace \
--set global.imagePullSecrets[0].name=ubtrace-pull-secret \
-n ubtrace --create-namespace
Values Reference¶
For the complete list of configurable values, see:
# Print all default values
helm show values ./deploy/helm/ubtrace
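To see only the values you have overridden in a running release, rather than every default, helm get values is useful:

```shell
# User-supplied overrides for the release
helm get values ubtrace -n ubtrace

# Merged values actually in effect (defaults + overrides)
helm get values ubtrace -n ubtrace --all
```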
Key files in the chart:
| File | Purpose |
|---|---|
| values.yaml | Default configuration |
| values-minikube.yaml | Minikube overlay (reduced resources, NGINX ingress) |
| values-eks.yaml | AWS EKS overlay (ALB, gp3, TLS) |
| values-openshift.yaml | OpenShift 4+ overlay (SCC-compatible, no Ingress) |