Installation

This guide covers deploying ubTrace in environments with no internet access, such as automotive OEM networks with strict security policies (TISAX, ISO 27001).

Quick Start

Prerequisites: Docker Engine 24+ with Compose v2, ~5 GB disk for images.

# 1. Extract the bundle
tar xzf ubtrace-offline-bundle-*.tar.gz
cd ubtrace-offline-bundle-*

# 2. Load Docker images into the local daemon
./offline-load.sh ubtrace-images-*.tar.gz

# 3. Create .env from the bundled template and fill in passwords
make init
vim .env  # Replace password placeholders (see "Passwords & Secrets" below)

# 4. Start ubTrace
make up

# 5. Verify all services are healthy
make status

# 6. (Optional) Import your documentation artifacts
#    The worker auto-detects changes -- no restart needed.
#    Pre-built CI output:
#      make import-build SRC=./ci-output ORG=mycompany PROJECT=my-project VERSION=v1
#    Or Sphinx source (builder builds automatically):
#      make import-src SRC=./my-sphinx-project ORG=mycompany PROJECT=my-project VERSION=v1

Note

macOS / Apple Silicon: Elasticsearch requires a workaround before step 4. See Platform Notes.

First Login

The bundled Keycloak realm includes a pre-created test user so you can log in immediately after deployment. The login works only while the pre-configured OIDC client secret is kept unchanged (see Passwords & Secrets below):

Username:  test
Password:  Test1234!
Email:     test-keycloak-user@useblocks.com

Open the ubTrace frontend (default: http://localhost:7155) and log in with these credentials to verify that the system is working.

Note

If you have already rotated the OIDC client secret (see Keycloak / OIDC Hardening below), this test user will still exist in Keycloak but the backend must be configured with the new secret for authentication to work.

Warning

This test user is intended for initial verification only. Before going to production, delete it and create your own users in Keycloak. See Keycloak / OIDC Hardening for the full hardening checklist.

Platform Notes

macOS / Apple Silicon (M1/M2/M3/M4)

The offline bundle ships linux/amd64 images targeting production Linux servers. On Apple Silicon Macs, most services run fine under Rosetta emulation, but Elasticsearch 9.x will fail with a seccomp unavailable error because the emulation layer does not support the Linux seccomp sandbox.

Workaround: Pull the native arm64 Elasticsearch image (requires internet):

docker pull docker.elastic.co/elasticsearch/elasticsearch:9.3.0
make up

This replaces the bundled amd64 image with the native arm64 variant for Elasticsearch only. You can ignore the "platform (linux/amd64) does not match" warnings for the other services – they run correctly under Rosetta.

Note

This is only needed for local testing on Mac. Production deployments on Linux/amd64 servers work without any workaround.
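The same platform choice can also be recorded declaratively in a Compose override file, so it survives future make up runs. A sketch, assuming the service is named elasticsearch in docker-compose-prod.yml (verify the name in your file):

```yaml
# docker-compose.override.yml -- illustrative; the service name
# "elasticsearch" is an assumption, check docker-compose-prod.yml
services:
  elasticsearch:
    platform: linux/arm64
```

Note that Compose auto-loads docker-compose.override.yml only for the default file names; with docker-compose-prod.yml you would pass it explicitly, e.g. docker compose -f docker-compose-prod.yml -f docker-compose.override.yml up -d.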

Linux (Production)

No special steps required. The bundled images are linux/amd64 and run natively. Ensure Docker Engine 24+ with Compose v2 is installed, and that the user running Docker has permissions to load images (docker load).

Configuration

Passwords & Secrets

The bundle includes .env.example with all required variables and sensible defaults. make init copies it to .env – you only need to replace the <CHANGE-ME> placeholders with strong, unique passwords:

  • POSTGRES_PASSWORD – application database

  • KC_DB_PASSWORD – Keycloak database

  • KEYCLOAK_ADMIN_PASSWORD – Keycloak admin console

  • REDIS_PASSWORD – cache and sessions

  • OIDC_CLIENT_SECRET – see note below

Important

OIDC Client Secret: The bundled Keycloak realm ships with a pre-configured client secret. The .env.example already contains the correct value (FnZuseq02pwaNKqsvDxL3jq4HhzPey2b). Do not change it unless you also regenerate the secret in Keycloak (see Keycloak / OIDC Hardening below). Changing the .env value without updating Keycloak will cause authentication failures on startup.

Warning

Do not use the # character in password values. Docker Compose treats # as an inline comment delimiter, which causes health check scripts and service commands to receive truncated values. Use only alphanumeric characters and symbols like !@$%^&*()-_=+ instead.
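The truncation the warning describes can be sketched with shell parameter expansion (illustrative only; the actual parsing is done by Docker Compose):

```shell
# Sketch: why '#' breaks unquoted .env values -- everything after it is
# treated as an inline comment by the parser.
line='REDIS_PASSWORD=Secret#123'
value=${line#*=}       # raw text after '=' -> Secret#123
seen=${value%%#*}      # what a '#'-comment parser keeps -> Secret
echo "intended: $value / actually used: $seen"
```

The service would receive Secret instead of Secret#123, and every dependent health check would fail with an authentication error.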

See Environment Variables for the complete list.

Accessing via IP Address (HTTP)

The default setup assumes HTTPS or localhost. If you access ubTrace over plain HTTP using an IP address (e.g., http://192.168.1.100:7155 in a lab or workshop network), authentication cookies will be silently rejected by browsers, because the cookies are set with the Secure attribute (HTTPS-only) by default.

To fix this, set COOKIE_SECURE=false in your .env:

COOKIE_SECURE=false

# All URL env vars must use the same protocol and host:
API_SERVER_PUBLIC_URL=http://192.168.1.100:7150
FRONTEND_PUBLIC_URL=http://192.168.1.100:7155
FRONTEND_UBT_URL=http://192.168.1.100:7155
UBTRACE_API_URL=http://192.168.1.100:7150/api
KC_HOSTNAME=http://192.168.1.100:7181
OIDC_ISSUER=http://192.168.1.100:7181/realms/ubtrace
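A quick way to catch protocol/host mismatches across these variables is to extract and deduplicate their scheme://host prefixes. A sketch using an inline sample .env (point the sed at your real .env instead):

```shell
# Sketch: verify the public URL variables share one scheme and host.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
API_SERVER_PUBLIC_URL=http://192.168.1.100:7150
FRONTEND_PUBLIC_URL=http://192.168.1.100:7155
FRONTEND_UBT_URL=http://192.168.1.100:7155
UBTRACE_API_URL=http://192.168.1.100:7150/api
EOF
# Strip the variable name and the port/path, keep scheme://host, deduplicate.
hosts=$(sed -E 's|^[A-Z_]+=(https?://[^:/]+).*|\1|' "$env_file" | sort -u)
echo "$hosts"   # exactly one line means the variables are consistent
```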

Then restart: make restart-api

Warning

Only use COOKIE_SECURE=false in trusted networks. For production deployments, place a reverse proxy (nginx, Traefik, Caddy) in front of ubTrace with TLS termination and keep COOKIE_SECURE=true.

Keycloak / OIDC Hardening

Before going live, rotate the client secret and remove the pre-created test user in Keycloak:

  1. Log in to Keycloak admin (http://localhost:7181, admin / your admin password)

  2. In the left sidebar, click Manage realms, then select ubtrace (you’ll land on the “Welcome to ubtrace” page)

  3. Go to Clients → nestjs-app → Credentials tab → Regenerate Secret

  4. Copy the new secret into .env as OIDC_CLIENT_SECRET

  5. Go to Users and delete the pre-created test user

  6. Restart the API server: make restart-api

Importing Artifacts

ubTrace uses a worker pipeline that automatically detects new or changed artifacts. No manual rebuild or API server restart is needed.

Option A: Pre-built CI Output

If your CI pipeline already produces built documentation, import the build output into input_build/:

# Using the import helper
make import-build SRC=./ci-output ORG=mycompany PROJECT=my-project VERSION=v1

The ub-worker picks up new or changed build output automatically.

Option B: Sphinx Source Files

If you want ubTrace to build your Sphinx projects, place source files into input_src/:

# Using the import helper
make import-src SRC=./my-sphinx-project ORG=mycompany PROJECT=my-project VERSION=v1

# Or copy manually
mkdir -p input_src/mycompany/my-project/v1
cp -r my-sphinx-project/* input_src/mycompany/my-project/v1/

The ubtrace-builder service polls input_src/ every 60 seconds, builds with Sphinx, and writes output to input_src_build/ (internal volume). The ub-worker then picks up the build output automatically.

Each version directory must be a self-contained Sphinx project with its own conf.py. The builder auto-detects three layouts:

Layout   Description                         Example
B        conf.py inside source/              v1/source/conf.py + v1/source/index.rst
A        conf.py at root + source/ subdir    v1/conf.py + v1/source/index.rst
C        Flat (conf.py alongside content)    v1/conf.py + v1/index.rst
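The detection can be sketched as a shell check over one version directory (illustrative only; the precedence order here is an assumption, not the builder's actual code):

```shell
# Sketch: mimic the layout auto-detection for one version directory.
d=$(mktemp -d)/v1
mkdir -p "$d/source"
touch "$d/source/conf.py" "$d/source/index.rst"   # sample Layout B project

if   [ -f "$d/source/conf.py" ]; then layout="B (conf.py inside source/)"
elif [ -f "$d/conf.py" ] && [ -d "$d/source" ]; then layout="A (conf.py at root + source/)"
elif [ -f "$d/conf.py" ]; then layout="C (flat)"
else layout="none -- not a self-contained Sphinx project"; fi
echo "Detected layout: $layout"
```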

Directory Overview

Directory          Customer-facing   Purpose
input_src/         Yes (mount)       Sphinx source files for builder
input_build/       Yes (mount)       Pre-built CI output for worker
input_src_build/   No (internal)     Builder output (optionally mountable for debug)
output/            No (internal)     Worker-processed data: NDJSON + Sphinx output (optionally mountable for debug)
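For pre-built CI output, the manual equivalent of make import-build looks like this sketch; the ORG/PROJECT/VERSION layout under input_build/ is assumed to mirror input_src/ (verify against input_build/README.md):

```shell
# Sketch: manual import of pre-built CI output (path layout assumed).
cd "$(mktemp -d)"
mkdir -p ci-output && touch ci-output/needs.json    # stand-in CI output
mkdir -p input_build/mycompany/my-project/v1
cp -r ci-output/. input_build/mycompany/my-project/v1/
ls input_build/mycompany/my-project/v1
```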

Day-to-Day Operations

Command            Description
make status        Show service health
make logs          Follow all logs (make logs SERVICE=ubtrace-api for one)
make up            Start all services
make down          Stop all services (preserves data)
make restart-api   Restart the API server only
make import-build  Import pre-built CI output into input_build/
make import-src    Import Sphinx source into input_src/ for builder
make verify        Run the offline verification script
make pull          Pull latest images (requires internet or registry)
make update        Pull latest images and restart
make clean         Stop services and delete all data volumes
make help          Show all available targets

Troubleshooting

Images not loading

# Verify Docker daemon is running
docker info

# List loaded ubTrace-related images
docker images | grep -E 'postgres|redis|elasticsearch|keycloak|ub-backend|ub-frontend|ub-builder|ub-worker'

Compose config errors

# Validate compose file resolves all variables correctly
UBTRACE_VERSION=1.1.1 docker compose -f docker-compose-prod.yml config

Services not starting

# Check service logs
docker compose -f docker-compose-prod.yml logs <service-name>

# Check health status of all services
docker compose -f docker-compose-prod.yml ps

Password authentication failed (stale Docker volumes)

If PostgreSQL or Keycloak fails with FATAL: password authentication failed, the cause is almost always stale Docker volumes from a previous installation. PostgreSQL sets passwords only on first initialization – changing passwords in .env has no effect on existing database volumes.

Warning

docker system prune -a does NOT remove named volumes.

To fully reset and start fresh:

# Stop all services AND remove named volumes
docker compose -f docker-compose-prod.yml down -v

# Then start again
make up

Private registry TLS issues

If using a self-signed certificate with your private registry:

# Add your CA certificate to Docker's trusted certs
sudo mkdir -p /etc/docker/certs.d/harbor.corp.example
sudo cp ca.crt /etc/docker/certs.d/harbor.corp.example/
sudo systemctl restart docker

Advanced

Private Registry Mirror

This workflow is for enterprise deployments that mirror images through a private Docker registry (Harbor, Nexus, GitLab Registry, or Docker Distribution).

Step 1: Set up a private registry (if needed)

docker run -d \
  --name registry \
  --restart=unless-stopped \
  -p 5000:5000 \
  -v registry-data:/var/lib/registry \
  registry:3

For production, configure TLS and authentication per your registry’s documentation.

Step 2: Mirror images

On a machine with internet access and network access to the private registry:

./scripts/offline-mirror.sh \
  --registry harbor.corp.example/ubtrace \
  --version 1.0.0

This pulls all 10 images from public registries, re-tags them with flattened paths under the private registry, and pushes them:

Public image                                          Mirrored as
postgres:16-alpine                                    harbor.corp.example/ubtrace/postgres:16-alpine
redis:7-alpine                                        harbor.corp.example/ubtrace/redis:7-alpine
docker.elastic.co/elasticsearch/elasticsearch:9.3.0   harbor.corp.example/ubtrace/elasticsearch:9.3.0
quay.io/keycloak/keycloak:26.4.0                      harbor.corp.example/ubtrace/keycloak:26.4.0
ghcr.io/useblocks/ub-backend-migrate:1.0.0            harbor.corp.example/ubtrace/ub-backend-migrate:1.0.0
ghcr.io/useblocks/ub-backend:1.0.0                    harbor.corp.example/ubtrace/ub-backend:1.0.0
ghcr.io/useblocks/ub-frontend:1.0.0                   harbor.corp.example/ubtrace/ub-frontend:1.0.0
ghcr.io/useblocks/ub-builder:1.0.0                    harbor.corp.example/ubtrace/ub-builder:1.0.0
ghcr.io/useblocks/ub-worker:1.0.0                     harbor.corp.example/ubtrace/ub-worker:1.0.0
ghcr.io/useblocks/keycloak-theme:1.0.0                harbor.corp.example/ubtrace/keycloak-theme:1.0.0
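The flattening rule can be sketched as a small shell helper (assumed logic for illustration, not the actual offline-mirror.sh implementation):

```shell
# Sketch: re-tag a public image under the private registry by keeping
# only the final path component plus tag (the "flattened path").
flatten() {
  reg=$1; img=$2
  echo "$reg/$(basename "$img")"
}
flatten harbor.corp.example/ubtrace docker.elastic.co/elasticsearch/elasticsearch:9.3.0
```

The real script would follow each re-tag with a docker tag and docker push of the new name.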

Step 3: Configure ubTrace

Set IMAGE_REGISTRY in your .env file:

IMAGE_REGISTRY=harbor.corp.example/ubtrace
UBTRACE_VERSION=1.1.1

Step 4: Deploy

docker compose -f docker-compose-prod.yml up -d

With IMAGE_REGISTRY set, all images are pulled from the private registry instead of public sources.

Verification

# Check images and compose config (no services started)
./scripts/offline-verify.sh --version 1.0.0

# Full verification: start services and wait for health checks
./scripts/offline-verify.sh --version 1.0.0 --up

# With private registry
IMAGE_REGISTRY=harbor.corp.example/ubtrace ./scripts/offline-verify.sh --version 1.0.0

Reference

Bundle Contents

File                                 Purpose
docker-compose-prod.yml              Production Compose configuration
.env.example                         Environment variable template
Makefile                             Convenience targets (make up, make down, …)
README-offline.md                    This deployment guide
offline-load.sh                      Load images from bundle into Docker
offline-verify.sh                    Verify images, compose config, health checks
scripts/offline-bundle.sh            Create offline bundle (pull + save + package)
scripts/offline-mirror.sh            Re-tag and push images to a private registry
keycloak/import/ubtrace-realm.json   Pre-configured Keycloak realm (production)
input_src/README.md                  Sphinx source directory structure guide
input_build/README.md                Pre-built artifact directory structure guide

Environment Variables

All customer-facing variables. IMAGE_REGISTRY and UBTRACE_VERSION control image sources; the rest configure services. When IMAGE_REGISTRY is empty or unset, images are pulled from their default public registries.

# Image Registry (leave empty for public Docker registries)
IMAGE_REGISTRY=

# Version
UBTRACE_VERSION=1.1.1

# PostgreSQL (Application Database)
POSTGRES_DB=ubtrace
POSTGRES_USER=ubtrace
POSTGRES_PASSWORD=<your-secure-password>

# PostgreSQL (Keycloak Database)
KC_DB_DATABASE=keycloak
KC_DB_USERNAME=keycloak
KC_DB_PASSWORD=<your-secure-password>

# Keycloak (Authentication Server)
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=<your-secure-password>
KC_HOSTNAME=http://localhost:7181
KEYCLOAK_PORT=7181

# Redis (Cache & Sessions)
REDIS_PASSWORD=<your-secure-password>
REDIS_MAXMEMORY=256mb
BACKCHANNEL_LOGOUT_TTL=15m

# Cookie Security (for HTTP / IP-based deployments)
COOKIE_SECURE=true            # Set to false for HTTP-only access (see above)

# API Server (NestJS Backend)
API_SERVER_PORT=7150
API_SERVER_PUBLIC_URL=http://localhost:7150
CLOUD_PROVIDER=local

# OIDC Configuration
OIDC_ISSUER=http://localhost:7181/realms/ubtrace
OIDC_CLIENT_ID=nestjs-app
# IMPORTANT: Must match the secret in Keycloak. Do not change unless you
# also regenerate it in Keycloak (see "Keycloak / OIDC Hardening").
OIDC_CLIENT_SECRET=FnZuseq02pwaNKqsvDxL3jq4HhzPey2b

# Frontend (Next.js)
FRONTEND_PORT=7155
FRONTEND_PUBLIC_URL=http://localhost:7155
FRONTEND_UBT_URL=http://localhost:7155
UBTRACE_API_URL=http://localhost:7150/api
# Legacy: traceability tree URL (keep default unless customized)
FRONTEND_VIS_TREE_URL=http://localhost:7140
# Additional CORS origins (comma-separated URLs, optional)
# CORS_EXTRA_ORIGINS=https://custom-app.example.com,https://other.example.com
CORS_EXTRA_ORIGINS=

# Elasticsearch (Search & Analytics)
ELASTICSEARCH_PORT=7184

# Ingest Endpoint (Optional)
INGEST_MAX_FILE_SIZE=524288000  # Max upload size in bytes (default: 500MB)

# Worker Pipeline
WORKER_POLL_INTERVAL_MS=30000   # Worker scanner polling interval (milliseconds)
BUILDER_POLL_INTERVAL=60     # Builder Sphinx build polling interval (seconds)

# License (required)
UBTRACE_LICENSE_FILE=/data/licenses/license.skm

See docker-compose-prod.yml for all available environment variables including advanced tuning parameters.
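The INGEST_MAX_FILE_SIZE default can be double-checked with shell arithmetic (500 MiB expressed in bytes):

```shell
# 500 MiB in bytes -- matches the INGEST_MAX_FILE_SIZE default above
bytes=$((500 * 1024 * 1024))
echo "$bytes"   # 524288000
```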

License Configuration

A valid license is required to use ubTrace. Without one, all feature API endpoints return 403 Forbidden. Only health checks, authentication, license status, and endpoint-availability endpoints remain accessible.

Contact support@useblocks.com for license activation.

License Status Values

Status       Meaning                                       Feature Endpoints
valid        License is active and not expired             Allowed
unlicensed   No license configured                         Blocked (403)
expired      License was valid but has passed its expiry   Blocked (403)
invalid      License is revoked, blocked, or malformed     Blocked (403)

Offline License Activation

For air-gapped deployments, ubTrace supports offline license validation via .skm activation files:

  1. Place your .skm license file in the licenses/ directory next to docker-compose-prod.yml:

    licenses/
      license.skm
    

    Note

    The .skm file must be in signed activation format containing licenseKey (base64-encoded), signature, and result fields. Contact support@useblocks.com if your activation file uses a different format.

  2. Configure the license in your .env:

    UBTRACE_LICENSE_FILE=/data/licenses/license.skm
    

    The licenses/ directory is mounted read-only into the API server container at /data/licenses/.

  3. Restart the API server:

    make restart-api
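Before restarting, the activation file can be given a quick structural check for the three documented fields (the file content below is a made-up stand-in, not a real license):

```shell
# Sketch: confirm an activation file mentions licenseKey, signature, result.
f=$(mktemp)
printf '%s\n' '{"licenseKey":"ZXhhbXBsZQ==","signature":"c2ln","result":"ok"}' > "$f"
ok=yes
for field in licenseKey signature result; do
  grep -q "$field" "$f" || ok=no
done
echo "all fields present: $ok"
```

If a field is missing, the file is likely not in signed activation format; contact support@useblocks.com as noted above.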
    

Verify License Status

Check the current license status via the API:

curl -s http://localhost:7150/api/v1/license/status | python3 -m json.tool

Example responses:

Valid license:

{
  "status": "valid",
  "tier": "licensed",
  "expiresAt": "2026-12-31T23:59:59.000Z",
  "daysUntilExpiration": 288,
  "features": ["all-organizations", "full-access"]
}

No license configured:

{
  "status": "unlicensed",
  "tier": "free",
  "features": []
}

Expired license:

{
  "status": "expired",
  "tier": "free",
  "expiresAt": "2025-01-01T00:00:00.000Z",
  "features": []
}

Note

License changes take effect after restarting the API server:

make restart-api