FaceKom Infrastructure

vuer_docker — Container Orchestration

Docker Compose Files (20+)

| File | Services |
| --- | --- |
| dev.yml | Core infra: RabbitMQ, PostgreSQL, nginx_proxy, syslog-ng |
| services.yml | pdfservice, nyilvantarto |
| vuer-oss.yml | vuer_oss |
| vuer-css.yml | vuer_css |
| esign-oss.yml | esign (backend) |
| esign-css.yml | esign (frontend) |
| vuer-cv.yml | vuer_cv (ML service) |
| vuer-cv-gpu.yml | vuer_cv with GPU |
| vuer-cv-dev.yml | vuer_cv development |
| janus-dev.yml | Janus WebRTC gateway (dev) |
| portal-css.yml | Portal frontend |
| facekom-library.yml | FaceKom library service |
| testenv.yml | All-in-one test environment |
| otel.yml | OpenTelemetry (Grafana, Loki, Prometheus, Tempo) |
| minio.yml | MinIO object storage |
| mssql.yml | Microsoft SQL Server |
| mysql.yml | MySQL |
| oracledb19.yml | Oracle DB 19 |
| clamav.yml | ClamAV antivirus |
| sftp.yaml | SFTP server |
| toxiproxy.yaml | Network fault injection |
| macos-vuer-dev.yml | macOS development setup |
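A typical session layers several of these files into one stack. A hedged sketch of the pattern (the `podman-compose` invocation and flag order are assumptions based on standard compose usage, not taken from the repo; the command is printed rather than executed so the layering is visible without a container runtime):

```shell
# Sketch: layering compose files into one stack (hypothetical invocation;
# file names are from the table above, exact flags may differ).
files="-f dev.yml -f vuer-oss.yml -f vuer-css.yml"
echo "podman-compose $files up -d"
# → podman-compose -f dev.yml -f vuer-oss.yml -f vuer-css.yml up -d
```

Later `-f` files override earlier ones, so `dev.yml` supplies the shared infra and the service files add on top.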

Environment Variables (.env)

| Variable | Value |
| --- | --- |
| Project | facekom-devel |
| VUER | 2026.1.UBI9 |
| ESIGN | 2024.4 |
| VUERCV | 4.6.2 |
| RabbitMQ | 4.1.4 |
| PostgreSQL | 16.6 |
| Dev UID/GID | 1000/1000 |
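As a `.env` file this might read as follows; `PROJECT_NAME` is referenced by the registry path below, but the other variable names are illustrative guesses, only the values come from the table:

```shell
# .env (sketch; exact variable names in vuer_docker may differ)
PROJECT_NAME=facekom-devel
VUER_VERSION=2026.1.UBI9
ESIGN_VERSION=2024.4
VUERCV_VERSION=4.6.2
RABBITMQ_VERSION=4.1.4
POSTGRES_VERSION=16.6
DEV_UID=1000
DEV_GID=1000
```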

Container Registry

Images pushed to: harbor.techteamer.com/${PROJECT_NAME}/

Network Architecture

Host Networking

All containers use network_mode: "host" — they share the host’s network namespace. Services communicate via localhost ports. This simplifies setup but reduces isolation.
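In compose terms the pattern looks like this (service name and image tag are placeholders assembled from values elsewhere in this doc; `network_mode: "host"` is the setting quoted above):

```yaml
# Sketch of the host-networking pattern used across the stack.
services:
  vuer_oss:                     # placeholder service definition
    image: harbor.techteamer.com/facekom-devel/vuer_oss:2026.1.UBI9
    network_mode: "host"        # shares the host network namespace;
                                # no `ports:` mapping is needed because
                                # the service binds host ports directly
```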

Port Map

graph TB
    subgraph "External Access (:443)"
        NginxProxy[Nginx Proxy]
    end

    subgraph "vuer_oss"
        OSS_nginx[:20080 Nginx]
        OSS_socket[:10080 Socket.IO]
        OSS_http[:10081 Express]
    end

    subgraph "vuer_css"
        CSS_nginx[:30080 Nginx]
        CSS_socket[:10082 Socket.IO]
        CSS_http[:10083 Express]
    end

    subgraph "esign_oss"
        EOSS_nginx[:20180 Nginx]
        EOSS_socket[:10180 Socket.IO]
        EOSS_web[:10181 Admin UI]
        EOSS_ext[:10182 External API]
    end

    subgraph "esign_css"
        ECSS_nginx[:30180 Nginx]
        ECSS_web[:10183 Express]
        ECSS_socket[:10184 Socket.IO]
    end

    subgraph "Other"
        CV[:40080 vuer_cv]
        Portal[:30380 portal_css]
        Library[:50080 facekom_library]
        RabbitMQ[:5671 AMQPS / :15672 Mgmt]
        PG[:5432 PostgreSQL]
    end

    NginxProxy --> OSS_nginx
    NginxProxy --> CSS_nginx
    NginxProxy --> EOSS_nginx
    NginxProxy --> ECSS_nginx
    NginxProxy --> CV
    NginxProxy --> Portal
    NginxProxy --> Library

Full Port Reference

| Service | Ports |
| --- | --- |
| vuer_oss (nginx) | 20080 (HTTP), 10080 (Socket.IO), 10081 (Express) |
| vuer_css (nginx) | 30080 (HTTP), 10082 (Socket.IO), 10083 (Express) |
| esign_oss (nginx) | 20180 (HTTP), 10180 (Socket.IO), 10181 (Admin UI), 10182 (external API) |
| esign_css (nginx) | 30180 (HTTP), 10183 (Express), 10184 (Socket.IO) |
| vuer_cv (nginx) | 40080 |
| portal_css | 30380 |
| facekom_library | 50080 |
| RabbitMQ | 5671 (AMQPS), 15672 (management) |
| PostgreSQL | 5432 |
| Nginx Proxy | 443 (HTTPS), 80 (redirect) |

Nginx Proxy Configuration

The main nginx_proxy routes HTTPS traffic by subdomain:

| Pattern | Target |
| --- | --- |
| oss-*.facekomdev.net | vuer_oss (:20080) |
| esign-oss-*.facekomdev.net | esign_oss (:20180) |
| esign-api-*.facekomdev.net | esign_oss external API (:10182 via nginx) |
| esign-css-*.facekomdev.net | esign_css (:30180) |
| *portal-*.facekomdev.net | portal_css (:30380) |
| *css-*.facekomdev.net | vuer_css (:30080) |
| *cv-*.facekomdev.net | vuer_cv (:40080) |
| library-*.facekomdev.net | facekom_library (:50080) |
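One routing rule of this shape, sketched for the first pattern (server_name regex and proxy directives are illustrative; only the ports and cert paths come from this doc):

```nginx
# Sketch: one subdomain-routing server block in nginx_proxy
server {
    listen 443 ssl;
    server_name ~^oss-.*\.facekomdev\.net$;   # oss-*.facekomdev.net

    ssl_certificate     /workspace/cert/dev.crt;
    ssl_certificate_key /workspace/cert/dev.key;

    location / {
        proxy_pass http://127.0.0.1:20080;    # vuer_oss nginx (host network)
        proxy_set_header Host $host;
    }
}
```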

All routes are served over HTTPS using self-signed certificates from /workspace/cert/.

Supervisor

All containers use supervisord as process manager, running:

  • The main application process(es)
  • Nginx (reverse proxy per container)
  • Redis (where needed, e.g., vuer_oss)
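A minimal sketch of such a supervisord.conf (program names and commands are illustrative, not read from the repo):

```ini
; Sketch: per-container supervisord layout (illustrative names/commands)
[supervisord]
nodaemon=true                   ; supervisord is PID 1 in the container

[program:app]
command=node /workspace/vuer_oss/index.js
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true

[program:redis]
command=redis-server --port 6379
autorestart=true
```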

Per-Container Redis

vuer_oss runs its own Redis instance inside the container instead of using a shared service. This simplifies deployment but duplicates infrastructure.

Volume Mounts (Common Pattern)

- /workspace/<service>:/workspace/<service>          # Source code
- ./workspace/cert/vuer_mq_cert:/workspace/vuer_mq_cert  # RabbitMQ TLS certs
- ./workspace/log/<service>:/var/log                 # Logs
- /workspace/cert:/workspace/cert                    # SSL certs

Base Images

| Service | Base Image |
| --- | --- |
| vuer_oss | UBI9 minimal + Node.js 24 + Java 11 + Redis + Nginx + Chromium + Oracle Instant Client + Janus |
| esign_oss/css | Custom build with Node.js |
| pdfservice | UBI8 minimal + Java 11 |
| nyilvantarto | Custom with Node.js |
| RabbitMQ | Custom build with TLS support |

DNS Resolution Chain (Development)

Browser -> macOS Resolver -> dnsmasq -> Tailscale IP -> Remote Server

  1. macOS Resolver (/etc/resolver/):
     • /etc/resolver/facekomdev.net: nameserver 127.0.0.1
     • /etc/resolver/test: nameserver 127.0.0.1
  2. dnsmasq (Homebrew v2.92 or containerized):
     • facekomdev.net -> 100.103.48.49
     • lederera.test -> 100.103.48.49
     • Local config: /opt/homebrew/etc/dnsmasq.conf
  3. Tailscale: IP 100.103.48.49 routes to the remote development server
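Steps 1–2 amount to two small config files; the contents below are reconstructed from the values above (dnsmasq's `address=/domain/ip` syntax is standard):

```conf
# /etc/resolver/facekomdev.net  (macOS per-domain resolver)
nameserver 127.0.0.1

# /opt/homebrew/etc/dnsmasq.conf  (relevant lines)
listen-address=127.0.0.1
address=/facekomdev.net/100.103.48.49
address=/lederera.test/100.103.48.49
```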

Per-Developer DNS

Each developer gets per-user subdomains of the form {service}-{username}.facekomdev.net:

  • css-{user}.facekomdev.net — vuer_css
  • oss-{user}.facekomdev.net — vuer_oss
  • cv-{user}.facekomdev.net — vuer_cv
  • api-{user}.facekomdev.net — API

Containerized dnsmasq

Located at /Users/levander/levandor/infra/dnsmasq/:

| Setting | Local | Container |
| --- | --- | --- |
| Listen address | 127.0.0.1 | 0.0.0.0 |
| Upstream DNS | System resolv.conf | Cloudflare 1.1.1.1, Google 8.8.8.8 |
| Daemon mode | Background | Foreground (--no-daemon) |
| DNS hygiene | Default | domain-needed, bogus-priv enabled |
| Cache | Default | 1000 entries |

Observability Stack (otel.yml)

graph LR
    Services[All Services] -->|logs| Promtail
    Services -->|metrics| Prometheus
    Services -->|traces| OTelCollector[OpenTelemetry Collector]

    Promtail --> Loki
    OTelCollector --> Tempo
    OTelCollector --> Prometheus

    Loki --> Grafana
    Prometheus --> Grafana
    Tempo --> Grafana

| Component | Purpose |
| --- | --- |
| Grafana | Dashboards and visualization |
| Loki | Log aggregation |
| Prometheus | Metrics collection |
| Tempo | Distributed tracing |
| Promtail | Log shipping agent |
| OpenTelemetry Collector | Telemetry pipeline |
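The trace and metric paths through the collector can be sketched as a collector config; the component names are standard OpenTelemetry Collector pieces, but the endpoints and exporter wiring are assumptions, not read from otel.yml:

```yaml
# Sketch: OpenTelemetry Collector pipelines matching the diagram above.
receivers:
  otlp:
    protocols:
      grpc:                    # services push traces/metrics via OTLP

exporters:
  otlp/tempo:
    endpoint: localhost:4317   # Tempo's OTLP ingest (assumed port)
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889     # scrape target for Prometheus

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/tempo]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```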

Janus WebRTC Gateway

  • Built from source (commit b02e09ed)
  • Used for real-time video/audio in identity verification
  • Bundled inside the vuer_oss container image
  • Recording conversion via janus-pp-rec utility

CoTURN (TURN Server)

  • WebRTC NAT traversal
  • Ports: 80, 443, 3478
  • Required for customers behind restrictive firewalls
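A coturn configuration covering those three ports might look like the following; the option names are standard coturn settings, while the realm, credentials, and cert paths are placeholders:

```conf
# Sketch: turnserver.conf for the listed ports (placeholder realm/creds)
listening-port=3478
alt-listening-port=80
tls-listening-port=443
realm=facekomdev.net
lt-cred-mech
user=webrtc:changeme
cert=/workspace/cert/dev.crt
pkey=/workspace/cert/dev.key
```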

TLS Certificates

Development Certs (/workspace/cert/)

  • dev.crt / dev.key / dev.csr — TLS certificates
  • dhparam.pem — Diffie-Hellman parameters
  • Managed in its own git repo

RabbitMQ TLS Certs

Generated via scripts in vuer_docker/workspace/cert/vuer_mq_cert/:

  • generate_ca.sh — CA certificate generation
  • generate_server.sh — Server certificate
  • generate_client.sh — Client certificate
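These scripts presumably follow the standard openssl CA flow. A self-contained sketch of that flow (subjects, key sizes, and validity periods are illustrative, not read from the actual scripts):

```shell
# Sketch of a generate_ca.sh / generate_server.sh style flow.
set -e
dir=$(mktemp -d) && cd "$dir"

# CA key + self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca.key -out ca.crt -subj "/CN=Test RabbitMQ CA"

# Server key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -subj "/CN=rabbitmq"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out server.crt -days 365

openssl verify -CAfile ca.crt server.crt   # prints "server.crt: OK"
```

The client certificate step (generate_client.sh) would repeat the server steps with a client CN.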

TSA Certificates (e-Szigno)

Hungarian e-Szigno test TSA certificates mounted into containers:

  • Test Root CA 2017
  • Test TSA CA 2017
  • Microsec Test Root CA 2008

Container Runtime

  • Podman 5.7.1 (not Docker)
  • VM: libkrun hypervisor (macOS optimized)
  • Resources: 8 CPUs, 8.7 GiB RAM, 93 GiB disk
  • Machine: podman-machine-default

Remote Server

| Property | Value |
| --- | --- |
| OS | Ubuntu 22.04.5 LTS |
| Workspace | /workspace/ contains all vuer projects |
| Access | SSH + Tailscale mesh network |
| Process management | Supervisor (auto-restarts on crash) |

Local Development Paths (levander)

| Purpose | Path |
| --- | --- |
| FaceKom source (SSHFS mount) | /Users/levander/coding/mnt/Facekom/ |
| FaceKom source (local mount) | /Users/levander/coding/facekom/ |
| Infrastructure code | /Users/levander/levandor/infra/ |
| dnsmasq container | /Users/levander/levandor/infra/dnsmasq/ |
| Obsidian docs | /Users/levander/levandor_obsidian/projects/facekom/ |