Local development environment for the Alpiq BESS platform, managed from the optimization-universe-iac/local/ directory. Runs Mando, algorithm services, and all infrastructure locally via Docker Compose.
Branch
Work done on andras/BE-1599. Key commit: 7a2b45a (2026-03-05) — “feat: interactive runner for local compose + docker layer cache + readme”
Quick Start
CLI Replacement
The Python run.py is being replaced by mando-cli, a Rust YAML-driven workflow orchestrator. Use mando run menu instead.
```shell
# New way (mando-cli)
cd mando-cli && cargo run -- run menu

# Legacy (Python)
cd optimization-universe-iac/local
python3 run.py
```
Run Modes
| Mode | Services | Algos | Use Case |
|------|----------|-------|----------|
| 1 | postgres, flyway, samba, wiremock | — | Database/infra testing |
| 2 | Mode 1 + mando + trader-dashboard | Mocked via WireMock stubs | Quick local dev |
| 3 | Mode 2 + forecast + optimization | Real algo services (WireMock proxy) | Full integration |
Mando can be built locally from source or pulled from GitLab Container Registry.
Compose Files
| File | Purpose |
|------|---------|
| compose.yaml | Base: infrastructure + mando + trader-dashboard |
| compose.algos.yaml | Overlay: adds real forecast + optimization services |
| compose.otel.yaml | Overlay: adds Jaeger all-in-one for local tracing |
| compose.registry.yaml | Overlay: pulls pre-built mando from GitLab registry |
```shell
# Infrastructure only
docker compose up -d

# Mando + mocked algos (local build)
docker compose --profile app up -d --build

# Mando from registry
MANDO_TAG=1.4.11-dev.123.abc docker compose -f compose.yaml -f compose.registry.yaml --profile app up -d

# Full stack with real algos
docker compose -f compose.yaml -f compose.algos.yaml --profile app up -d --build
```
The Gurobi license volume mount had :ro which prevented the Gurobi runtime from writing temp files. Fix: removed :ro from the volume mount. Additionally, the GUROBI_LIC environment variable is now set with the license file content directly (alternative to volume-mounting the license file).
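The resulting service config looks roughly like this. This is a sketch only: the service name, license paths, and volume layout are assumptions, not copied from the actual compose file.

```yaml
# Illustrative sketch — paths and service layout are assumptions.
services:
  optimization:
    volumes:
      # No :ro suffix: Gurobi needs to write temp files alongside the license.
      - ./gurobi/gurobi.lic:/opt/gurobi/gurobi.lic
    environment:
      # Alternative: pass the license file content directly.
      GUROBI_LIC: ${GUROBI_LIC:-}
```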
WireMock — Universal Mock
All of mando's external calls route through WireMock, which replaces all previous mocks (MockServer, the Volue EMS Rust mock).
Proxy Architecture
Mando never talks directly to algo services. Algo host env vars point to WireMock network aliases:
```
Mando ──→ forecast-algo:8080 ──→ WireMock ──→ (stub OR proxy to forecast:5000)
Mando ──→ optimization-algo:8080 ──→ WireMock ──→ (stub OR proxy to optimization:6000)
```
| Mando Env Var | WireMock Alias | Real Service (proxied) |
|---------------|----------------|------------------------|
| MANDO_FORECAST_ALGO_HOST | forecast-algo:8080 | forecast:5000 |
| MANDO_FORECAST_DA_ALGO_HOST | forecast-da-algo:8080 | forecast:5000 |
| MANDO_FORECAST_AS_ALGO_HOST | forecast-as-algo:8080 | forecast:5100 |
| MANDO_OPTIMIZATION_ALGO_HOST | optimization-algo:8080 | optimization:6000 |
- Without overlay: WireMock serves stubs (priority 80).
- With compose.algos.yaml: proxy mappings (priority 1) forward to real services via Host header matching.
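A proxy mapping of the kind described might look like the following. This is a sketch built from standard WireMock mapping fields; the actual JSON files in proxy-algos may differ.

```json
{
  "priority": 1,
  "request": {
    "method": "ANY",
    "urlPattern": ".*",
    "headers": {
      "Host": { "equalTo": "forecast-algo:8080" }
    }
  },
  "response": {
    "proxyBaseUrl": "http://forecast:5000"
  }
}
```

Because the priority-1 proxy mapping outranks the priority-80 stubs, dropping it into mappings/ is enough to switch a service from stubbed to real.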
Proxy-Algos Mappings
Read-Only Filesystem Fix (2026-03-15)
WireMock proxy-algos files were originally mounted as a nested volume under mappings/. This caused issues because WireMock’s filesystem was read-only. Fix: proxy-algos JSON files are now copied into the main mappings/ directory instead of being volume-mounted separately.
Stub Mappings (19 total)
| # | Service | Endpoint |
|---|---------|----------|
| 10 | OnePassport | POST /system/token |
| 11 | Volue ATP | POST /auth/token |
| 20 | Metis | GET /timeseries/{dp} (templated by dp_name) |
| 30 | MDR | GET /documents/{path} |
| 40-41 | Position Manager | GraphQL queries/mutations |
| 50-53 | OPL | Register, strategy CRUD, order create |
| 60-62 | Volue ATP | Orderbooks, trading stats, param templates |
| 70-73 | Volue EMS | Auth, spot data, timeseries read/write |
| 80-81 | Algos | /run and /version (fallback stubs) |
Response Templating
Metis: bodyFileName: "metis/{{request.pathSegments.[1]}}.json" — drop fixtures in __files/metis/
Auth stubs return static JWTs with exp: 4102444800 (year 2100), so mando's token caching works normally.
Entra ID Limitation
Entra ID auth (OPL, Position Manager) is hardcoded to login.microsoftonline.com — cannot route through WireMock. Dummy env vars set so mando starts, but PM/OPL flows fail at auth.
Configuration Layers
Config loads in order (last wins):
1. Repo .env files — Python services load these via env_file
2. Compose environment — Docker-specific overrides only (hostnames)
Mando’s .cargo/config.toml provides default values for all env vars. The mando-entrypoint.sh script parses this file and exports variables to the environment. Compose environment block takes precedence over .cargo/config.toml values.
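A rough sketch of what that parsing step could look like. The real mando-entrypoint.sh may differ; the function name and the assumption that defaults live in an [env] table as KEY = "value" pairs are mine. Variables already present in the environment (for example, from the compose environment block) are left untouched, which preserves the precedence described above.

```shell
# Hypothetical sketch of the entrypoint's config parsing step.
export_cargo_env_defaults() {
  in_env=0
  while IFS= read -r line; do
    case "$line" in
      "[env]"*) in_env=1; continue ;;   # entering the [env] table
      "["*)     in_env=0; continue ;;   # any other table ends it
      "#"*|"")  continue ;;             # skip comments and blank lines
    esac
    [ "$in_env" -eq 1 ] || continue
    case "$line" in *=*) ;; *) continue ;; esac
    key=$(printf '%s' "${line%%=*}" | tr -d ' ')
    val=$(printf '%s' "${line#*=}" | sed 's/^ *"//; s/" *$//')
    [ -n "$key" ] || continue
    # Export only when the variable is not already set,
    # so compose-provided values win over config.toml defaults.
    if [ -z "$(eval "printf '%s' \"\${$key:-}\"")" ]; then
      export "$key=$val"
    fi
  done < "$1"
}
```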
The Alpiq NTLM proxy requires http:// (not https://) for the proxy URL. Using https:// causes SSL_ERROR_SYSCALL because the proxy expects a plain HTTP CONNECT tunnel.
```shell
# WRONG — TLS to proxy itself, causes SSL_ERROR_SYSCALL
curl -x https://alpiq-ntlm.taild4189d.ts.net:3128 https://target.alpiq.services

# RIGHT — plain HTTP to proxy, HTTPS to target
curl -x http://alpiq-ntlm.taild4189d.ts.net:3128 https://target.alpiq.services
```
Tailscale NTLM Proxy
Internal Alpiq services (nexus.cicdtst.alpiq.services) are behind a corporate proxy accessible via Tailscale:
- gitlab.com is in NO_PROXY — reachable directly without the proxy
- nexus.cicdtst.alpiq.services — requires the proxy (Nexus hosts the eigeropt package)
- Gurobi license server — requires the proxy for activation/validation
Services Requiring Proxy
| Service | Internal Dependency | Why |
|---------|---------------------|-----|
| Optimization | eigeropt from Nexus | poetry install during Docker build |
| Optimization | Gurobi license | License validation at runtime |
| Forecast | py-mando from GitLab | poetry install during Docker build (direct to GitLab, no proxy needed) |
Docker Build Networking
Proxy env vars are passed as build args via x-proxy-args anchor in compose.algos.yaml. The .env file must have proxy vars uncommented for builds requiring internal registry access.
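The anchor pattern looks roughly like this. The build context path and the exact variable set are assumptions, not copied from compose.algos.yaml.

```yaml
# Illustrative sketch of a shared proxy build-args anchor.
x-proxy-args: &proxy-args
  HTTP_PROXY: ${HTTP_PROXY:-}
  HTTPS_PROXY: ${HTTPS_PROXY:-}
  NO_PROXY: ${NO_PROXY:-}

services:
  optimization:
    build:
      context: ../optimization   # illustrative path
      args:
        <<: *proxy-args          # merge the shared proxy args
```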
Compose maps host port 4213 to container port 4214 — access at http://localhost:4213.
Key Design Decisions
- WireMock as universal mock — single container replaces all previous mocks; stubs are JSON files
- Config from repo files — services read their own config; compose only overrides hostnames
- Mando decoupled from algos — always talks to WireMock aliases; routing via proxy mappings
- Profiles — infra runs by default; app services require --profile app
- dockerfile_inline — mando build defined inline in compose.yaml
OpenTelemetry / Jaeger Tracing
Local trace visualization via Jaeger all-in-one. Zero code changes — mando already exports OTLP traces + metrics via the opentelemetry-otlp crate, just disabled locally by default.
Quick Start
```shell
# Start with tracing enabled (add OTel overlay to any compose command)
cd optimization-universe-iac/local
docker compose -f compose.yaml -f compose.otel.yaml --profile app up -d --build

# Full stack with algos + tracing
docker compose -f compose.yaml -f compose.algos.yaml -f compose.otel.yaml --profile app up -d --build

# Or via mando-cli recipe (OTel toggle in start recipes)
cargo run -- run mando/start -e enable_otel=true
cargo run -- run mando/start-with-algos -e enable_otel=true -e mando_source=local
```
Jaeger UI: http://localhost:16686 — traces appear under service bess-os-service-mando.
How It Works
compose.otel.yaml is a post-build overlay that layers Jaeger on top of whichever compose files you already have. It adds Jaeger all-in-one (OTLP on port 4318) and overrides mando's OTLP exporter env vars to point at it.
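In outline, the overlay looks like the following sketch. The image tag, port list, and any variable beyond OTEL_EXPORTER_OTLP_ENDPOINT are assumptions.

```yaml
# Illustrative sketch of the OTel overlay.
services:
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "16686:16686"   # Jaeger UI
      - "4318:4318"     # OTLP HTTP ingest
  mando:
    environment:
      # Point mando's OTLP exporter at Jaeger inside the Docker network.
      OTEL_EXPORTER_OTLP_ENDPOINT: http://jaeger:4318
```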
The OTel overlay is always applied last — after compose.yaml, any algo overlay, or registry overlay.
The start recipes (mando/start.yaml, mando/start-with-algos.yaml) now include an OTel toggle choice that conditionally adds -f compose.otel.yaml to the compose command.
- Metrics — OTLP HTTP to /v1/metrics (30 s intervals)
- HTTP client spans — http.client.duration histogram via Tower middleware
OTEL_EXPORT_DISABLED Gotcha
OTEL_EXPORT_DISABLED was removed from .cargo/config.toml. The mando code checks env::var("OTEL_EXPORT_DISABLED").is_err() — meaning any set value (even "false") disables export. The variable must be completely unset for export to work. The compose overlay handles this correctly.
RUST_LOG Must Include OTel Spans
RUST_LOG must include axum_tracing_opentelemetry=info (or broader) so spans are not filtered out at the subscriber level. Without this, traces silently never reach Jaeger.
Jaeger Must Be in NO_PROXY
jaeger must be listed in NO_PROXY in .env; otherwise the corporate NTLM proxy intercepts OTLP traffic from mando to jaeger within the Docker network, and traces silently fail to arrive.
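In .env this looks something like the line below. Only gitlab.com and jaeger are confirmed entries; the rest of the list is illustrative.

```
# .env — jaeger must appear in NO_PROXY alongside the existing entries
NO_PROXY=localhost,127.0.0.1,gitlab.com,jaeger
```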
The Python algorithm services (forecast, optimization) do not have OpenTelemetry SDK integration yet. Only mando (Rust) exports traces. Adding OTel to Python services is a future enhancement.
Traces provide:

- Latency breakdown per external call (Metis, Volue EMS, OPL, etc.)
- Flow execution traces with span hierarchy
- HTTP client duration histograms

Verified: the bess-os-service-mando service is visible in Jaeger with traces flowing.
WireMock Integration
WireMock 3.x does not have native OTel support. The wiremock-otel-extension JAR can be added later for end-to-end trace propagation through the proxy layer. Currently traces show mando’s outgoing calls but not WireMock’s internal routing.
Production
Production uses Datadog Agent as the OTLP collector on port 4318 (ECS Fargate sidecar). The same OTEL_EXPORTER_OTLP_ENDPOINT env var controls the target.
Useful Commands
```shell
# Trigger D1/PI1 flow
curl http://localhost:8080/test/bess/d1_pi1_flow1

# WireMock admin
curl -s http://localhost:1080/__admin/mappings | python3 -m json.tool

# Rebuild single service
docker compose --profile app up --build forecast

# Teardown (remove volumes)
docker compose --profile app down -v
```
Pre-built Docker images can be exported to optimization-universe-iac/local/image-cache/ for offline distribution to team members without registry access:
```shell
# Export images
docker save -o image-cache/mando.tar mando:latest
docker save -o image-cache/forecast.tar forecast:latest

# Import on another machine
docker load -i image-cache/mando.tar
```
This is managed by mando-cli recipes that handle export/import automatically.