Installation¶
This guide covers deploying ubTrace in environments with no internet access, such as automotive OEM networks with strict security policies (TISAX, ISO 27001).
Quick Start¶
Prerequisites: Docker Engine 24+ with Compose v2, ~5 GB disk for images.
# 1. Extract the bundle
tar xzf ubtrace-offline-bundle-*.tar.gz
cd ubtrace-offline-bundle-*
# 2. Load Docker images into the local daemon
./offline-load.sh ubtrace-images-*.tar.gz
# 3. Create .env from the bundled template and fill in passwords
make init
vim .env # Replace password placeholders (see "Passwords & Secrets" below)
# 4. Start ubTrace
make up
# 5. Verify all services are healthy
make status
# 6. (Optional) Import your documentation artifacts
# The worker auto-detects changes -- no restart needed.
# Pre-built CI output:
# make import-build SRC=./ci-output ORG=mycompany PROJECT=my-project VERSION=v1
# Or Sphinx source (builder builds automatically):
# make import-src SRC=./my-sphinx-project ORG=mycompany PROJECT=my-project VERSION=v1
Note
macOS / Apple Silicon: Elasticsearch requires a workaround before step 4. See Platform Notes.
First Login¶
The bundled Keycloak realm includes a pre-created test user so you can log in immediately after deployment. This user only works when the pre-configured OIDC client secret is kept unchanged (see Passwords & Secrets below):
| Username | Password |
|---|---|
| | |
Open the ubTrace frontend (default: http://localhost:7155) and log in with
these credentials to verify that the system is working.
Note
If you have already rotated the OIDC client secret (see Keycloak / OIDC Hardening below), this test user will still exist in Keycloak but the backend must be configured with the new secret for authentication to work.
Warning
This test user is intended for initial verification only. Before going to production, delete it and create your own users in Keycloak. See Keycloak / OIDC Hardening for the full hardening checklist.
Platform Notes¶
macOS / Apple Silicon (M1/M2/M3/M4)¶
The offline bundle ships linux/amd64 images targeting production Linux servers.
On Apple Silicon Macs, most services run fine under Rosetta emulation, but
Elasticsearch 9.x will fail with a seccomp unavailable error because
the emulation layer does not support the Linux seccomp sandbox.
Workaround: Pull the native arm64 Elasticsearch image (requires internet):
docker pull docker.elastic.co/elasticsearch/elasticsearch:9.3.0
make up
This replaces the bundled amd64 image with the native arm64 variant for
Elasticsearch only. You can ignore the "platform (linux/amd64) does not match"
warnings for other services – they run correctly under Rosetta.
Note
This is only needed for local testing on Mac. Production deployments on Linux/amd64 servers work without any workaround.
Linux (Production)¶
No special steps required. The bundled images are linux/amd64 and run natively.
Ensure Docker Engine 24+ with Compose v2 is installed, and that the user running
Docker has permissions to load images (docker load).
Configuration¶
Passwords & Secrets¶
The bundle includes .env.example with all required variables and sensible
defaults. make init copies it to .env – you only need to replace the
<CHANGE-ME> placeholders with strong, unique passwords:
- POSTGRES_PASSWORD – application database
- KC_DB_PASSWORD – Keycloak database
- KEYCLOAK_ADMIN_PASSWORD – Keycloak admin console
- REDIS_PASSWORD – cache and sessions
- OIDC_CLIENT_SECRET – see note below
Important
OIDC Client Secret: The bundled Keycloak realm ships with a pre-configured
client secret. The .env.example already contains the correct value
(FnZuseq02pwaNKqsvDxL3jq4HhzPey2b). Do not change it unless you also
regenerate the secret in Keycloak (see Keycloak / OIDC Hardening below).
Changing the .env value without updating Keycloak will cause authentication
failures on startup.
Warning
Do not use the # character in password values. Docker Compose treats
# as an inline comment delimiter, which causes health check scripts
and service commands to receive truncated values. Use only alphanumeric
characters and symbols like !@$%^&*()-_=+ instead.
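As one way to generate compliant values, the following sketch draws random characters from /dev/urandom using an alphabet that deliberately excludes `#` (an illustrative helper, not part of the bundle):

```shell
# Illustrative helper (not part of the bundle): generate a 24-character
# password from an alphabet that deliberately excludes '#'.
gen_pw() {
  LC_ALL=C tr -dc 'A-Za-z0-9!@$%^&*()_=+-' < /dev/urandom | head -c 24
  echo
}

pw="$(gen_pw)"
echo "length: ${#pw}"
```

Run `gen_pw` once per secret and paste the result into `.env`; avoid echoing real secrets in shared terminals.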
See Environment Variables for the complete list.
Accessing via IP Address (HTTP)¶
The default setup assumes HTTPS or localhost. If you access ubTrace over
plain HTTP using an IP address (e.g., http://192.168.1.100:7155 in a lab or
workshop network), browsers will silently reject the authentication cookies,
because they are marked Secure (HTTPS-only) by default and Secure cookies are
never sent over plain HTTP.
To fix this, set COOKIE_SECURE=false in your .env:
COOKIE_SECURE=false
# All URL env vars must use the same protocol and host:
API_SERVER_PUBLIC_URL=http://192.168.1.100:7150
FRONTEND_PUBLIC_URL=http://192.168.1.100:7155
FRONTEND_UBT_URL=http://192.168.1.100:7155
UBTRACE_API_URL=http://192.168.1.100:7150/api
KC_HOSTNAME=http://192.168.1.100:7181
OIDC_ISSUER=http://192.168.1.100:7181/realms/ubtrace
Then restart: make restart-api
Warning
Only use COOKIE_SECURE=false in trusted networks. For production
deployments, place a reverse proxy (nginx, Traefik, Caddy) in front of
ubTrace with TLS termination and keep COOKIE_SECURE=true.
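Before restarting, a quick sanity check can confirm that the URL variables agree on scheme and host, as required above (a hypothetical helper, not part of the bundle; it only inspects the variables listed in this guide):

```shell
# Hypothetical sanity check: all URL variables should agree on scheme://host.
check_env_urls() {
  grep -E '^(API_SERVER_PUBLIC_URL|FRONTEND_PUBLIC_URL|FRONTEND_UBT_URL|UBTRACE_API_URL|KC_HOSTNAME|OIDC_ISSUER)=' "$1" |
    sed -E 's|^[A-Z_]+=(https?://[^:/]+).*|\1|' |
    sort -u | wc -l
}
# A count of 1 means every URL uses the same scheme and host
# (ports are allowed to differ and are ignored here).
```

`check_env_urls .env` prints 1 when all entries agree; any higher number points to a mismatch.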
Keycloak / OIDC Hardening¶
Before going live, rotate the client secret and remove the pre-created test user in Keycloak:
1. Log in to the Keycloak admin console (http://localhost:7181, admin / your admin password)
2. In the left sidebar, click Manage realms, then select ubtrace (you'll land on the "Welcome to ubtrace" page)
3. Go to Clients → nestjs-app → Credentials tab → Regenerate Secret
4. Copy the new secret into .env as OIDC_CLIENT_SECRET
5. Go to Users and delete the pre-created test user
6. Restart the API server: make restart-api
Importing Artifacts¶
ubTrace uses a worker pipeline that automatically detects new or changed artifacts. No manual rebuild or API server restart is needed.
Option A: Pre-Built CI Output (Recommended)¶
If your CI pipeline already builds with ubt_sphinx, place the output into
input_build/:
# Using the import helper
make import-build SRC=./ci-output ORG=mycompany PROJECT=my-project VERSION=v1
# Or copy manually
mkdir -p input_build/mycompany/my-project/v1
cp -r ci-output/* input_build/mycompany/my-project/v1/
The ub-worker service polls input_build/ every 30 seconds, runs Rust
preprocessing, and loads data into the database.
Required structure (Layout A / ubt_sphinx output):
input_build/{org}/{project}/{version}/
docs/ubtrace/needs.json # required (fixed path)
docs/ubtrace/*.fjson # document files
config/ubtrace_project.toml
reports/schema_violations.json # optional
Important
The fields in config/ubtrace_project.toml (organization,
project_id, version) must exactly match the filesystem path. For
example, if the path is input_build/mycompany/my-project/v1/, the TOML
must have organization = "mycompany", project_id = "my-project",
version = "v1". A mismatch causes the worker to cache incorrect metadata,
resulting in empty API responses or wrong organization assignments.
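As a sketch of how to catch such mismatches before the worker does, the following hypothetical check derives the expected values from the directory path and greps for them in the TOML (helper name and patterns are illustrative; it assumes the simple `key = "value"` style shown in this guide):

```shell
# Illustrative pre-import check: do the TOML fields match the path?
check_meta() {
  path="${1%/}"                 # e.g. input_build/mycompany/my-project/v1
  toml="$path/config/ubtrace_project.toml"
  ver="$(basename "$path")"
  proj="$(basename "$(dirname "$path")")"
  org="$(basename "$(dirname "$(dirname "$path")")")"
  grep -q "^organization *= *\"$org\"" "$toml" &&
    grep -q "^project_id *= *\"$proj\"" "$toml" &&
    grep -q "^version *= *\"$ver\"" "$toml" &&
    echo "metadata matches path"
}
```

Running `check_meta input_build/mycompany/my-project/v1` prints "metadata matches path" only when all three fields agree with the path components.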
When building with ubt_sphinx, set these in your conf.py to produce the
correct output:
ubtrace_organization = "mycompany"
ubtrace_project = "my-project"
ubtrace_version = "v1"
Tip
Use an atomic write pattern (staging directory + mv) to avoid
partial-read races with the scanner.
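A minimal sketch of that pattern, assuming the scanner ignores dot-directories (the staging name and paths are illustrative):

```shell
# Demo input standing in for real CI output:
mkdir -p ci-output && echo '{}' > ci-output/needs.json

# Atomic publish: copy into a staging directory on the same filesystem,
# then a single rename makes the version visible to the scanner all at once.
DEST=input_build/mycompany/my-project
STAGE="$DEST/.staging-v1"
mkdir -p "$STAGE"
cp -r ci-output/. "$STAGE"/
mv "$STAGE" "$DEST/v1"    # rename is atomic within one filesystem
```

The rename only stays atomic if staging and destination are on the same filesystem; staging in /tmp and moving across mount points degrades to a copy.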
Option B: Sphinx Source Files¶
If you want ubTrace to build your Sphinx projects, place source files into
input_src/:
# Using the import helper
make import-src SRC=./my-sphinx-project ORG=mycompany PROJECT=my-project VERSION=v1
# Or copy manually
mkdir -p input_src/mycompany/my-project/v1
cp -r my-sphinx-project/* input_src/mycompany/my-project/v1/
The ubtrace-builder service polls input_src/ every 60 seconds, builds
with Sphinx, and writes output to input_src_build/ (internal volume). The
ub-worker then picks up the build output automatically.
Each version directory must be a self-contained Sphinx project with its own
conf.py. The builder auto-detects three layouts:
| Layout | Description | Example |
|---|---|---|
| B | | |
| A | | |
| C | Flat | |
Directory Overview¶
| Directory | Customer-facing | Purpose |
|---|---|---|
| input_src/ | Yes (mount) | Sphinx source files for builder |
| input_build/ | Yes (mount) | Pre-built CI output for worker |
| input_src_build/ | No (internal) | Builder output (optionally mountable for debug) |
| | No (internal) | Worker-processed data: NDJSON + Sphinx output (optionally mountable for debug) |
Day-to-Day Operations¶
| Command | Description |
|---|---|
| make status | Show service health |
| | Follow all logs |
| make up | Start all services |
| | Stop all services (preserves data) |
| make restart-api | Restart the API server only |
| make import-build | Import pre-built CI output into input_build/ |
| make import-src | Import Sphinx source into input_src/ |
| | Run the offline verification script |
| | Pull latest images (requires internet or registry) |
| | Pull latest images and restart |
| | Stop services and delete all data volumes |
| | Show all available targets |
Troubleshooting¶
Images not loading¶
# Verify Docker daemon is running
docker info
# List loaded ubTrace-related images
docker images | grep -E 'postgres|redis|elasticsearch|keycloak|ub-backend|ub-frontend|ub-builder|ub-worker'
Compose config errors¶
# Validate compose file resolves all variables correctly
UBTRACE_VERSION=1.1.1 docker compose -f docker-compose-prod.yml config
Services not starting¶
# Check service logs
docker compose -f docker-compose-prod.yml logs <service-name>
# Check health status of all services
docker compose -f docker-compose-prod.yml ps
Password authentication failed (stale Docker volumes)¶
If PostgreSQL or Keycloak fail with FATAL: password authentication failed,
this is almost always caused by stale Docker volumes from a previous
installation. PostgreSQL only sets passwords on first initialization –
changing passwords in .env has no effect on existing database volumes.
Warning
docker system prune -a does NOT remove named volumes.
To fully reset and start fresh:
# Stop all services AND remove named volumes
docker compose -f docker-compose-prod.yml down -v
# Then start again
make up
Private registry TLS issues¶
If using a self-signed certificate with your private registry:
# Add your CA certificate to Docker's trusted certs
sudo mkdir -p /etc/docker/certs.d/harbor.corp.example
sudo cp ca.crt /etc/docker/certs.d/harbor.corp.example/
sudo systemctl restart docker
Advanced¶
Private Registry Mirror¶
These steps apply to enterprise deployments that use a private Docker registry (Harbor, Nexus, GitLab Registry, or Docker Distribution).
Step 1: Set up a private registry (if needed)
docker run -d \
--name registry \
--restart=unless-stopped \
-p 5000:5000 \
-v registry-data:/var/lib/registry \
registry:3
For production, configure TLS and authentication per your registry’s documentation.
Step 2: Mirror images
On a machine with internet access and network access to the private registry:
./scripts/offline-mirror.sh \
--registry harbor.corp.example/ubtrace \
--version 1.1.1
This pulls all 10 images from public registries, re-tags them with flattened paths under the private registry, and pushes them.
Step 3: Configure ubTrace
Set IMAGE_REGISTRY in your .env file:
IMAGE_REGISTRY=harbor.corp.example/ubtrace
UBTRACE_VERSION=1.1.1
Step 4: Deploy
docker compose -f docker-compose-prod.yml up -d
With IMAGE_REGISTRY set, all images are pulled from the private registry
instead of public sources.
Verification¶
# Check images and compose config (no services started)
./scripts/offline-verify.sh --version 1.1.1
# Full verification: start services and wait for health checks
./scripts/offline-verify.sh --version 1.1.1 --up
# With private registry
IMAGE_REGISTRY=harbor.corp.example/ubtrace ./scripts/offline-verify.sh --version 1.1.1
Reference¶
Bundle Contents¶
| File | Purpose |
|---|---|
| docker-compose-prod.yml | Production Compose configuration |
| .env.example | Environment variable template |
| Makefile | Convenience targets |
| | This deployment guide |
| offline-load.sh | Load images from bundle into Docker |
| scripts/offline-verify.sh | Verify images, compose config, health checks |
| | Create offline bundle (pull + save + package) |
| scripts/offline-mirror.sh | Re-tag and push images to a private registry |
| | Pre-configured Keycloak realm (production) |
| | Sphinx source directory structure guide |
| | Pre-built artifact directory structure guide |
Environment Variables¶
All customer-facing variables. IMAGE_REGISTRY and UBTRACE_VERSION control
image sources; the rest configure services. When IMAGE_REGISTRY is empty or
unset, images are pulled from their default public registries.
# Image Registry (leave empty for public Docker registries)
IMAGE_REGISTRY=
# Version
UBTRACE_VERSION=1.1.1
# PostgreSQL (Application Database)
POSTGRES_DB=ubtrace
POSTGRES_USER=ubtrace
POSTGRES_PASSWORD=<your-secure-password>
# PostgreSQL (Keycloak Database)
KC_DB_DATABASE=keycloak
KC_DB_USERNAME=keycloak
KC_DB_PASSWORD=<your-secure-password>
# Keycloak (Authentication Server)
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=<your-secure-password>
KC_HOSTNAME=http://localhost:7181
KEYCLOAK_PORT=7181
# Redis (Cache & Sessions)
REDIS_PASSWORD=<your-secure-password>
REDIS_MAXMEMORY=256mb
BACKCHANNEL_LOGOUT_TTL=15m
# Cookie Security (for HTTP / IP-based deployments)
COOKIE_SECURE=true # Set to false for HTTP-only access (see above)
# API Server (NestJS Backend)
API_SERVER_PORT=7150
API_SERVER_PUBLIC_URL=http://localhost:7150
CLOUD_PROVIDER=local
# OIDC Configuration
OIDC_ISSUER=http://localhost:7181/realms/ubtrace
OIDC_CLIENT_ID=nestjs-app
# IMPORTANT: Must match the secret in Keycloak. Do not change unless you
# also regenerate it in Keycloak (see "Keycloak / OIDC Hardening").
OIDC_CLIENT_SECRET=FnZuseq02pwaNKqsvDxL3jq4HhzPey2b
# Frontend (Next.js)
FRONTEND_PORT=7155
FRONTEND_PUBLIC_URL=http://localhost:7155
FRONTEND_UBT_URL=http://localhost:7155
UBTRACE_API_URL=http://localhost:7150/api
# Legacy: traceability tree URL (keep default unless customized)
FRONTEND_VIS_TREE_URL=http://localhost:7140
# Additional CORS origins (comma-separated URLs, optional)
# CORS_EXTRA_ORIGINS=https://custom-app.example.com,https://other.example.com
CORS_EXTRA_ORIGINS=
# Elasticsearch (Search & Analytics)
ELASTICSEARCH_PORT=7184
# Ingest Endpoint (Optional)
INGEST_MAX_FILE_SIZE=524288000 # Max upload size in bytes (default: 500MB)
# Worker Pipeline
WORKER_POLL_INTERVAL_MS=30000 # Worker scanner polling interval (milliseconds)
BUILDER_POLL_INTERVAL=60 # Builder Sphinx build polling interval (seconds)
# License (required)
UBTRACE_LICENSE_FILE=/data/licenses/license.skm
See docker-compose-prod.yml for all available environment variables including
advanced tuning parameters.
License Configuration¶
A valid license is required to use ubTrace. Without one, all feature API
endpoints return 403 Forbidden. Only health checks, authentication, license
status, and endpoint-availability endpoints remain accessible.
Contact support@useblocks.com for license activation.
License Status Values¶
| Status | Meaning | Feature Endpoints |
|---|---|---|
| valid | License is active and not expired | Allowed |
| unlicensed | No license configured | Blocked (403) |
| expired | License was valid but has passed its expiry | Blocked (403) |
| | License is revoked, blocked, or malformed | Blocked (403) |
Offline License Activation¶
For air-gapped deployments, ubTrace supports offline license validation via
.skm activation files:
1. Place your .skm license file in the licenses/ directory next to
   docker-compose-prod.yml:

   licenses/
     license.skm

   Note: The .skm file must be in signed activation format containing
   licenseKey (base64-encoded), signature, and result fields. Contact
   support@useblocks.com if your activation file uses a different format.

2. Configure the license in your .env:

   UBTRACE_LICENSE_FILE=/data/licenses/license.skm

3. The licenses/ directory is mounted read-only into the API server
   container at /data/licenses/.

4. Restart the API server:

   make restart-api
Verify License Status¶
Check the current license status via the API:
curl -s http://localhost:7150/api/v1/license/status | python3 -m json.tool
Example responses:
Valid license:
{
"status": "valid",
"tier": "licensed",
"expiresAt": "2026-12-31T23:59:59.000Z",
"daysUntilExpiration": 288,
"features": ["all-organizations", "full-access"]
}
No license configured:
{
"status": "unlicensed",
"tier": "free",
"features": []
}
Expired license:
{
"status": "expired",
"tier": "free",
"expiresAt": "2025-01-01T00:00:00.000Z",
"features": []
}
Note
License changes take effect after restarting the API server:
make restart-api