Installation¶
This guide covers deploying ubTrace in environments with no internet access, such as automotive OEM networks with strict security policies (TISAX, ISO 27001).
Quick Start¶
Prerequisites: see System Requirements for full details. Minimum: Docker Engine 24+ with Compose v2, 4 CPU, 8 GB RAM, 20 GB disk.
# 1. Extract the bundle
tar xzf ubtrace-offline-bundle-*.tar.gz
cd ubtrace-offline-bundle-*
# 2. Load Docker images into the local daemon
./offline-load.sh ubtrace-images-*.tar.gz
# 3. Create .env from the bundled template and fill in passwords
make init
vim .env # Replace every <CHANGE-ME> placeholder with a strong password
# 4. Start ubTrace
make up
# 5. Verify all services are healthy
make status
# 6. (Optional) Import your documentation artifacts
# The worker auto-detects changes -- no restart needed.
# Pre-built CI output:
# make import-build SRC=./ci-output ORG=mycompany PROJECT=my-project VERSION=v1
# Or Sphinx source (builder builds automatically):
# make import-src SRC=./my-sphinx-project ORG=mycompany PROJECT=my-project VERSION=v1
Note
macOS / Apple Silicon: Elasticsearch requires a workaround before step 4. See Platform Notes.
Platform Notes¶
macOS / Apple Silicon (M1/M2/M3/M4)¶
The offline bundle ships linux/amd64 images targeting production Linux servers.
On Apple Silicon Macs, most services run fine under Rosetta emulation, but
Elasticsearch 9.x will fail with a seccomp unavailable error because
the emulation layer does not support the Linux seccomp sandbox.
Workaround: Pull the native arm64 Elasticsearch image (requires internet):
docker pull docker.elastic.co/elasticsearch/elasticsearch:9.3.0
make up
This replaces the bundled amd64 image with the native arm64 variant for
Elasticsearch only. You can ignore the “platform (linux/amd64) does not match”
warnings for other services – they run correctly under Rosetta.
Note
This is only needed for local testing on Mac. Production deployments on Linux/amd64 servers work without any workaround.
Linux (Production)¶
No special steps required. The bundled images are linux/amd64 and run natively.
Ensure Docker Engine 24+ with Compose v2 is installed, and that the user running
Docker has permissions to load images (docker load).
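A quick way to confirm the permissions mentioned above (an illustrative check, not part of the bundle):

```shell
# Check whether the current user can reach the Docker daemon at all;
# `docker load` will fail with a permission error otherwise.
if docker info >/dev/null 2>&1; then
  echo "docker access ok"
else
  echo "no docker access (add your user to the docker group, or use sudo)"
fi
```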
Configuration¶
Passwords & Secrets¶
The bundle includes .env.example with all required variables and sensible
defaults. make init copies it to .env – you only need to replace the
<CHANGE-ME> placeholders with strong, unique passwords:
- POSTGRES_PASSWORD – application database
- KC_DB_PASSWORD – Keycloak database
- KEYCLOAK_ADMIN_PASSWORD – Keycloak admin console
- REDIS_PASSWORD – cache and sessions
- OIDC_CLIENT_SECRET – OIDC client secret
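A small sanity check (illustrative, not part of the bundle) to confirm no placeholders were missed:

```shell
# List any <CHANGE-ME> placeholders still present in .env.
if grep -q '<CHANGE-ME>' .env 2>/dev/null; then
  echo "unfilled placeholders remain:"
  grep -n '<CHANGE-ME>' .env
else
  echo "all placeholders replaced"
fi
```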
Warning
Do not use the # character in password values. Docker Compose treats
# as an inline comment delimiter, which causes health check scripts
and service commands to receive truncated values. Use only alphanumeric
characters and symbols like !@$%^&*()-_=+ instead.
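One way to generate compliant passwords (a sketch using openssl; any generator works as long as the output avoids #):

```shell
# Generate a 24-character password with no '#' (and no base64
# padding/slash characters) so Docker Compose cannot truncate it.
gen_pw() {
  openssl rand -base64 64 | tr -d '#/+=\n' | cut -c1-24
}
PW=$(gen_pw)
echo "generated: ${PW}"
```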
For quick-start / evaluation, you can use the pre-configured client secret that ships with the bundled Keycloak realm:
OIDC_CLIENT_SECRET=FnZuseq02pwaNKqsvDxL3jq4HhzPey2b
See Environment Variables for the complete list.
Keycloak / OIDC Hardening¶
Before going live, rotate the client secret and remove the pre-created test user in Keycloak:
1. Log in to the Keycloak admin console (http://localhost:7181, user admin with your admin password)
2. In the left sidebar, click Manage realms, then select ubtrace (you’ll land on the “Welcome to ubtrace” page)
3. Go to Clients → nestjs-app → Credentials tab → Regenerate Secret
4. Copy the new secret into .env as OIDC_CLIENT_SECRET
5. Go to Users and delete the pre-created test user
6. Restart the API server: make restart-api
Importing Artifacts¶
ubTrace uses a worker pipeline that automatically detects new or changed artifacts. No manual rebuild or API server restart is needed.
Option A: Pre-Built CI Output (Recommended)¶
If your CI pipeline already builds with ubt_sphinx, place the output into
input_build/:
# Using the import helper
make import-build SRC=./ci-output ORG=mycompany PROJECT=my-project VERSION=v1
# Or copy manually
mkdir -p input_build/mycompany/my-project/v1
cp -r ci-output/* input_build/mycompany/my-project/v1/
The ub-worker service polls input_build/ every 30 seconds, runs Rust
preprocessing, and loads data into the database.
Required structure (Layout A / ubt_sphinx output):
input_build/{org}/{project}/{version}/
docs/ubtrace/needs.json # required (fixed path)
docs/ubtrace/*.fjson # document files
config/ubtrace_project.toml
reports/schema_violations.json # optional
Tip
Use an atomic write pattern (staging directory + mv) to avoid
partial-read races with the scanner.
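The tip above can be sketched as a small helper (illustrative; atomic_import is a hypothetical name, and it assumes the destination version directory does not exist yet):

```shell
# Copy into a hidden staging dir on the same filesystem, then rename
# into place; the worker's scanner never sees a half-written tree.
atomic_import() {  # atomic_import <src-dir> <dest-version-dir>
  src=$1; dest=$2
  stage="$(dirname "$dest")/.staging.$$"
  mkdir -p "$(dirname "$dest")"
  cp -r "$src" "$stage"
  mv "$stage" "$dest"   # rename(2) is atomic within one filesystem
}
# Example mirroring the manual copy above:
# atomic_import ./ci-output input_build/mycompany/my-project/v1
```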
Option B: Sphinx Source Files¶
If you want ubTrace to build your Sphinx projects, place source files into
input_src/:
# Using the import helper
make import-src SRC=./my-sphinx-project ORG=mycompany PROJECT=my-project VERSION=v1
# Or copy manually
mkdir -p input_src/mycompany/my-project/v1
cp -r my-sphinx-project/* input_src/mycompany/my-project/v1/
The ubtrace-builder service polls input_src/ every 60 seconds, builds
with Sphinx, and writes output to input_src_build/ (internal volume). The
ub-worker then picks up the build output automatically.
Each version directory must be a self-contained Sphinx project with its own
conf.py. The builder auto-detects three layouts:
| Layout | Description | Example |
|---|---|---|
| B | … | … |
| A | … | … |
| C | Flat (…) | … |
Directory Overview¶
| Directory | Customer-facing | Purpose |
|---|---|---|
| input_src/ | Yes (mount) | Sphinx source files for builder |
| input_build/ | Yes (mount) | Pre-built CI output for worker |
| input_src_build/ | No (internal) | Builder output (optionally mountable for debug) |
| … | No (internal) | Worker-processed data: NDJSON + Sphinx output (optionally mountable for debug) |
Day-to-Day Operations¶
| Command | Description |
|---|---|
| make status | Show service health |
| … | Follow all logs (…) |
| make up | Start all services |
| … | Stop all services (preserves data) |
| make restart-api | Restart the API server only |
| make import-build | Import pre-built CI output into input_build/ |
| make import-src | Import Sphinx source into input_src/ |
| … | Run the offline verification script |
| … | Pull latest images (requires internet or registry) |
| … | Pull latest images and restart |
| … | Stop services and delete all data volumes |
| … | Show all available targets |
Troubleshooting¶
Images not loading¶
# Verify Docker daemon is running
docker info
# List loaded ubTrace-related images
docker images | grep -E 'postgres|redis|elasticsearch|keycloak|ub-backend|ub-frontend|ub-builder|ub-worker'
Compose config errors¶
# Validate compose file resolves all variables correctly
UBTRACE_VERSION=1.0.3 docker compose -f docker-compose-prod.yml config
Services not starting¶
# Check service logs
docker compose -f docker-compose-prod.yml logs <service-name>
# Check health status of all services
docker compose -f docker-compose-prod.yml ps
Private registry TLS issues¶
If using a self-signed certificate with your private registry:
# Add your CA certificate to Docker's trusted certs
sudo mkdir -p /etc/docker/certs.d/harbor.corp.example
sudo cp ca.crt /etc/docker/certs.d/harbor.corp.example/
sudo systemctl restart docker
Advanced¶
Private Registry Mirror¶
For enterprise deployments with a private Docker registry (Harbor, Nexus, GitLab Registry, or Docker Distribution).
Step 1: Set up a private registry (if needed)
docker run -d \
--name registry \
--restart=unless-stopped \
-p 5000:5000 \
-v registry-data:/var/lib/registry \
registry:3
For production, configure TLS and authentication per your registry’s documentation.
Step 2: Mirror images
On a machine with internet access and network access to the private registry:
./scripts/offline-mirror.sh \
--registry harbor.corp.example/ubtrace \
--version 1.0.0
This pulls all 10 images from public registries, re-tags them with flattened paths under the private registry, and pushes them:
| Public image | Mirrored as |
|---|---|
| … | … |
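The flattening rule can be illustrated with a small sketch (flatten is a hypothetical helper; the actual mapping lives in offline-mirror.sh):

```shell
# Drop the public registry and namespace, keeping only the final image
# name and tag under the private registry prefix.
flatten() {  # flatten <public-image-ref> <private-prefix>
  ref=$1; prefix=$2
  name=${ref%:*}   # everything before the tag
  tag=${ref##*:}   # the tag itself
  echo "${prefix}/${name##*/}:${tag}"
}
flatten docker.elastic.co/elasticsearch/elasticsearch:9.3.0 harbor.corp.example/ubtrace
# → harbor.corp.example/ubtrace/elasticsearch:9.3.0
```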
Step 3: Configure ubTrace
Set IMAGE_REGISTRY in your .env file:
IMAGE_REGISTRY=harbor.corp.example/ubtrace
UBTRACE_VERSION=1.0.3
Step 4: Deploy
docker compose -f docker-compose-prod.yml up -d
With IMAGE_REGISTRY set, all images are pulled from the private registry
instead of public sources.
Verification¶
# Check images and compose config (no services started)
./scripts/offline-verify.sh --version 1.0.0
# Full verification: start services and wait for health checks
./scripts/offline-verify.sh --version 1.0.0 --up
# With private registry
IMAGE_REGISTRY=harbor.corp.example/ubtrace ./scripts/offline-verify.sh --version 1.0.0
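For scripted smoke tests, you might wrap a health check in a polling helper (wait_for is a hypothetical sketch; the license-status endpoint used in the example is the one shown in the Reference section):

```shell
# Poll a URL until it answers or attempts run out; prints "up" or
# "timeout". Usage: wait_for <url> <max-attempts>
wait_for() {
  url=$1; n=$2
  while [ "$n" -gt 0 ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up"; return 0
    fi
    n=$((n - 1))
    if [ "$n" -gt 0 ]; then sleep 2; fi
  done
  echo "timeout"; return 1
}
# Example: wait_for http://localhost:7150/api/v1/license/status 30
```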
Reference¶
Bundle Contents¶
| File | Purpose |
|---|---|
| docker-compose-prod.yml | Production Compose configuration |
| .env.example | Environment variable template |
| Makefile | Convenience targets (…) |
| … | This deployment guide |
| offline-load.sh | Load images from bundle into Docker |
| scripts/offline-verify.sh | Verify images, compose config, health checks |
| … | Create offline bundle (pull + save + package) |
| scripts/offline-mirror.sh | Re-tag and push images to a private registry |
| … | Pre-configured Keycloak realm (production) |
| … | Sphinx source directory structure guide |
| … | Pre-built artifact directory structure guide |
Environment Variables¶
All customer-facing variables. IMAGE_REGISTRY and UBTRACE_VERSION control
image sources; the rest configure services. When IMAGE_REGISTRY is empty or
unset, images are pulled from their default public registries.
# Image Registry (leave empty for public Docker registries)
IMAGE_REGISTRY=
# Version
UBTRACE_VERSION=1.0.3
# PostgreSQL (Application Database)
POSTGRES_DB=ubtrace
POSTGRES_USER=ubtrace
POSTGRES_PASSWORD=<your-secure-password>
# PostgreSQL (Keycloak Database)
KC_DB_DATABASE=keycloak
KC_DB_USERNAME=keycloak
KC_DB_PASSWORD=<your-secure-password>
# Keycloak (Authentication Server)
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=<your-secure-password>
KC_HOSTNAME=http://localhost:7181
KEYCLOAK_PORT=7181
# Redis (Cache & Sessions)
REDIS_PASSWORD=<your-secure-password>
REDIS_MAXMEMORY=256mb
BACKCHANNEL_LOGOUT_TTL=15m
# API Server (NestJS Backend)
API_SERVER_PORT=7150
API_SERVER_PUBLIC_URL=http://localhost:7150
CLOUD_PROVIDER=local
# OIDC Configuration
OIDC_ISSUER=http://localhost:7181/realms/ubtrace
OIDC_CLIENT_ID=nestjs-app
OIDC_CLIENT_SECRET=<your-client-secret>
# Frontend (Next.js)
FRONTEND_PORT=7155
FRONTEND_PUBLIC_URL=http://localhost:7155
FRONTEND_UBT_URL=http://localhost:7155
UBTRACE_API_URL=http://localhost:7150/api
# Legacy: traceability tree URL (keep default unless customized)
FRONTEND_VIS_TREE_URL=http://localhost:7140
# Elasticsearch (Search & Analytics)
ELASTICSEARCH_PORT=7184
ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=<your-elasticsearch-password>
# Worker Pipeline
WORKER_POLL_INTERVAL_MS=30000 # Worker scanner polling interval (milliseconds)
BUILDER_POLL_INTERVAL=60 # Builder Sphinx build polling interval (seconds)
# License (Optional - free tier without license, useblocks org only)
UBTRACE_LICENSE_FILE=
See docker-compose-prod.yml for all available environment variables including
advanced tuning parameters.
License Configuration¶
By default, ubTrace runs in free tier mode: the application is fully
functional but access is limited to the useblocks demo organization. You can
explore the demo data, but cannot create or access your own organizations.
To unlock all organizations, you need a license. Contact support@useblocks.com for license activation.
Offline License Activation¶
For air-gapped deployments, ubTrace supports offline license validation via
.skm activation files:
1. Place your .skm license file in the licenses/ directory next to docker-compose-prod.yml:

   licenses/
       license.skm

2. Configure the license in your .env:

   UBTRACE_LICENSE_FILE=/data/licenses/license.skm

   The RSA public key for signature verification is pre-configured in the application. The licenses/ directory is mounted read-only into the API server container at /data/licenses/.

3. Restart the API server:

   make restart-api
Verify License Status¶
Check the current license status via the API:
curl -s http://localhost:7150/api/v1/license/status | python3 -m json.tool
Note
License changes take effect after restarting the API server:
make restart-api