Deployment¶
This guide deploys ReefLogic on your own Linux host using Docker Compose. The stack runs three containers — PostgreSQL with PostGIS, Keycloak (OIDC), and the ReefLogic admin gRPC server — and is fronted by a host-installed HAProxy that terminates TLS and routes by SNI.
By the end of this guide you will have a working deployment reachable at two public hostnames of your choice (for example admin.example.com for the gRPC API and auth.example.com for the Keycloak login endpoint).
What you will build¶
| Component | Role | Where it runs |
|---|---|---|
| `db` | PostgreSQL 18 + PostGIS. One cluster hosting two databases and two roles. | Docker (private network) |
| `keycloak` | OIDC provider. The ReefLogic realm is imported on first boot. | Docker → 127.0.0.1:9092 |
| `admin-server` | ReefLogic admin gRPC service. Runs Flyway schema migrations on each start. | Docker → 127.0.0.1:9999 |
| `haproxy` | TLS termination, ACME, SNI-based routing to the two loopback ports above. | Host (not Docker) |
Only HAProxy publishes ports 80 and 443 to the outside world. The two backend containers bind to loopback only, so the only client that can reach them is the host HAProxy.
Prerequisites¶
Before starting, confirm you have:
- A 64-bit Linux host (x86_64 or ARM64) with Docker Engine 24+ and the Docker Compose v2 plugin.
- HAProxy installed on the host with TLS termination already working for at least one domain you control. The compose stack itself does not ship HAProxy.
- Two public DNS records that resolve to this host, one per service:
  - admin gRPC endpoint, for example `admin.example.com`
  - Keycloak login endpoint, for example `auth.example.com`
- A TLS certificate at `/etc/haproxy/certs/your-cert.pem` (combined fullchain + private key in a single PEM, which is HAProxy's expected format) that includes both hostnames as Subject Alternative Names.
- Credentials for the ReefLogic container registry to pull the two custom images (`git.reeflogic.org/reeflogic/admin-server` and `git.reeflogic.org/reeflogic/keycloak`). A read-only access token is sufficient.
- Approximately 4 GB RAM, 2 vCPU, and 20 GB free disk for a small deployment.
Authenticate Docker against the ReefLogic registry once before pulling images:
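For a registry hosted at git.reeflogic.org this is a single `docker login`; supply your registry account name and the read-only access token when prompted:

```shell
docker login git.reeflogic.org
# Username: <your registry account>
# Password: <read-only access token>
```

The credential is cached by Docker, so this only needs to happen once per host.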
Step 1 — Lay out the deployment directory¶
Pick a directory on the host where the four configuration files will live. Throughout this guide we use ~/reeflogic. The final layout will be:
~/reeflogic/
├── compose.yml # Docker Compose stack definition (Step 2)
├── .env # secrets + per-deployment values (Step 3)
├── reeflogic_admin.conf # admin server runtime configuration (Step 4)
└── pg/
└── init.d/
└── 01_create_dbs.sh # Postgres bootstrap script (Step 5)
Create the directory tree:
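A single `mkdir -p` creates the whole tree; the four files themselves are written in Steps 2–5:

```shell
mkdir -p ~/reeflogic/pg/init.d
```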
The Compose stack expects all four files at exactly these paths relative to compose.yml. Missing files cause silent failures (in particular, an empty pg/init.d/ directory makes Postgres start without ever creating the application roles).
Step 2 — Compose stack: compose.yml¶
Paste the following into ~/reeflogic/compose.yml. It defines all three containers, the private network, and the persistent volumes. Every ${...} placeholder is read from .env at docker compose up time.
# ReefLogic appliance compose stack.
#
# `docker compose up -d` brings the full stack: postgres+postgis (two DBs
# in one cluster), Keycloak (with the ReefLogic realm imported on first
# start), and the ReefLogic admin server (which runs Flyway migrate
# before starting).
#
# TLS termination, ACME, and SNI routing live on the host HAProxy
# (configured in Step 6). admin-server publishes on 127.0.0.1:9999
# (gRPC, h2) and keycloak on 127.0.0.1:9092 (HTTP/1.1); the host
# HAProxy reaches both over loopback. The host-side ports differ from
# the container-internal ports (9090 and 8080) because those are
# frequently already in use on the host.
#
# All variables come from .env (see Step 3).
name: reeflogic-appliance

networks:
  internal:
    driver: bridge

volumes:
  pg_data:
  admin_server_logs:

services:
  db:
    image: postgis/postgis:${POSTGIS_TAG}
    restart: unless-stopped
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_SUPER_PASSWORD}
      POSTGRES_DB: postgres
      REEFLOGIC_DB_USER: ${REEFLOGIC_DB_USER}
      REEFLOGIC_DB_PASSWORD: ${REEFLOGIC_DB_PASSWORD}
      REEFLOGIC_DB_NAME: ${REEFLOGIC_DB_NAME}
      KEYCLOAK_DB_USER: ${KEYCLOAK_DB_USER}
      KEYCLOAK_DB_PASSWORD: ${KEYCLOAK_DB_PASSWORD}
      KEYCLOAK_DB_NAME: ${KEYCLOAK_DB_NAME}
    volumes:
      # Postgres 18+ images (postgis/postgis:18.x-*) place the cluster
      # in a major-version subdirectory like /var/lib/postgresql/18/docker
      # and refuse to start with a volume mounted directly at
      # /var/lib/postgresql/data. Mount one level up at
      # /var/lib/postgresql so the image manages the version subdir.
      - pg_data:/var/lib/postgresql
      - ./pg/init.d:/docker-entrypoint-initdb.d:ro
    networks: [internal]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 20

  keycloak:
    image: ${REEFLOGIC_KEYCLOAK_IMAGE}:${REEFLOGIC_VERSION}
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    environment:
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://db:5432/${KEYCLOAK_DB_NAME}
      KC_DB_USERNAME: ${KEYCLOAK_DB_USER}
      KC_DB_PASSWORD: ${KEYCLOAK_DB_PASSWORD}
      KC_HOSTNAME: ${KEYCLOAK_DOMAIN}
      KC_HTTP_ENABLED: "true"
      KC_PROXY_HEADERS: xforwarded
      KC_HEALTH_ENABLED: "true"
      KC_BOOTSTRAP_ADMIN_USERNAME: ${KC_BOOTSTRAP_ADMIN_USERNAME}
      KC_BOOTSTRAP_ADMIN_PASSWORD: ${KC_BOOTSTRAP_ADMIN_PASSWORD}
      REEFLOGIC_ADMIN_USERNAME: ${REEFLOGIC_ADMIN_USERNAME}
      REEFLOGIC_ADMIN_PASSWORD: ${REEFLOGIC_ADMIN_PASSWORD}
      REEFLOGIC_ADMIN_REDIRECT_URIS_JSON: ${REEFLOGIC_ADMIN_REDIRECT_URIS_JSON}
    # Loopback-only: the host HAProxy is the only client.
    ports:
      - "127.0.0.1:9092:8080"
    networks: [internal]
    healthcheck:
      # Probe the management port (9000) via bash's /dev/tcp; terminate the
      # headers with a blank line and ask the server to close so cat returns.
      test: ["CMD-SHELL", "exec 3<>/dev/tcp/127.0.0.1/9000 && echo -e 'GET /health/ready HTTP/1.1\\r\\nhost: localhost\\r\\nconnection: close\\r\\n\\r\\n' >&3 && cat <&3 | grep -q '200 OK'"]
      interval: 10s
      timeout: 5s
      retries: 30

  admin-server:
    image: ${REEFLOGIC_ADMIN_SERVER_IMAGE}:${REEFLOGIC_VERSION}
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    environment:
      REEFLOGIC_DB_HOST: db
      REEFLOGIC_DB_PORT: 5432
      REEFLOGIC_DB_NAME: ${REEFLOGIC_DB_NAME}
      REEFLOGIC_DB_USER: ${REEFLOGIC_DB_USER}
      REEFLOGIC_DB_PASSWORD: ${REEFLOGIC_DB_PASSWORD}
      REEFLOGIC_OIDC_ISSUER: https://${KEYCLOAK_DOMAIN}/realms/ReefLogic
    volumes:
      # Bind-mount the operator-supplied gRPC server config at the path
      # the admin server resolves by default. No env override is needed.
      - ./reeflogic_admin.conf:/var/lib/reeflogic/.reeflogic/reeflogic_admin.conf:ro
      # Persist the rotating spdlog files written at
      # /var/log/reeflogic/reef-server-admin.log.
      - admin_server_logs:/var/log/reeflogic
    # Loopback-only: the host HAProxy is the only client.
    ports:
      - "127.0.0.1:9999:9090"
    networks: [internal]
Step 3 — Environment file: .env¶
Paste the following into ~/reeflogic/.env. Replace every blank value before bringing the stack up:
# Image tags ----------------------------------------------------------------
# REEFLOGIC_VERSION pins the admin-server and keycloak image tags (both are
# tagged with the package version of the build that produced them).
REEFLOGIC_VERSION=
REEFLOGIC_ADMIN_SERVER_IMAGE=git.reeflogic.org/reeflogic/admin-server
REEFLOGIC_KEYCLOAK_IMAGE=git.reeflogic.org/reeflogic/keycloak
POSTGIS_TAG=18.3-3.5
# Public hostnames ----------------------------------------------------------
# These feed Keycloak's KC_HOSTNAME and the realm issuer URL the admin
# server advertises. The host HAProxy (Step 6) must also have ACLs and
# backends for both names, and its certificate must cover them as SANs.
ADMIN_DOMAIN=admin.example.com
KEYCLOAK_DOMAIN=auth.example.com
# Postgres ------------------------------------------------------------------
POSTGRES_SUPER_PASSWORD=
REEFLOGIC_DB_USER=reeflogic
REEFLOGIC_DB_PASSWORD=
REEFLOGIC_DB_NAME=reeflogic
KEYCLOAK_DB_USER=keycloak
KEYCLOAK_DB_PASSWORD=
KEYCLOAK_DB_NAME=keycloak
# Keycloak ------------------------------------------------------------------
KC_BOOTSTRAP_ADMIN_USERNAME=admin
KC_BOOTSTRAP_ADMIN_PASSWORD=
# First admin user inside the ReefLogic realm.
REEFLOGIC_ADMIN_USERNAME=reeflogic
REEFLOGIC_ADMIN_PASSWORD=
# JSON array — must be valid JSON since it is interpolated verbatim into
# the realm import file.
REEFLOGIC_ADMIN_REDIRECT_URIS_JSON=["http://127.0.0.1:*/*"]
Generate the secrets with openssl:
# Five passwords to fill in below; Step 4 generates its own secrets separately.
openssl rand -hex 16 # for POSTGRES_SUPER_PASSWORD
openssl rand -hex 16 # for REEFLOGIC_DB_PASSWORD
openssl rand -hex 16 # for KEYCLOAK_DB_PASSWORD
openssl rand -hex 16 # for KC_BOOTSTRAP_ADMIN_PASSWORD
openssl rand -hex 16 # for REEFLOGIC_ADMIN_PASSWORD
Set REEFLOGIC_VERSION to the build version you want to deploy (the value the registry tags the images with; check with your release manager or the registry web UI). Finally, lock down the file mode — it now holds five secrets:
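A restrictive mode readable only by the deploying user is enough, since Compose reads `.env` as that user:

```shell
chmod 0600 ~/reeflogic/.env
```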
Step 4 — Admin server runtime config: reeflogic_admin.conf¶
The admin server reads its runtime configuration from a libconfig file that Compose bind-mounts read-only into the container. Paste the following into ~/reeflogic/reeflogic_admin.conf and replace every CHANGE-ME field:
server_admin: {
    // One-time deployment initializer service. Set to false after the
    // first successful Reef-init run if you want to disable it.
    enable_reef_init_service = true;

    // CHANGE-ME. Shared secret used by the one-time init flow; distribute
    // out-of-band to whoever runs the initial bootstrap.
    reef_init_auth_shared_secret = "<30+ random chars>";

    secret: {
        // CHANGE-ME. 30+ random ASCII characters. Seeds the at-rest
        // field-encryption key derivation.
        salt = "<openssl rand -hex 15 output>";

        // CHANGE-ME. 120+ random ASCII characters. Seeds the global
        // server secret key.
        secret = "<openssl rand -hex 60 output>";
    };

    db: {
        host = "db";            // matches the compose service name
        port = "5432";
        dbname = "reeflogic";   // must match REEFLOGIC_DB_NAME in .env
        user = "reeflogic";     // must match REEFLOGIC_DB_USER in .env
        password = "<must match REEFLOGIC_DB_PASSWORD from .env>";
        options = "-c search_path=reeflogic";
        sslmode = "prefer";
        application_name = "reeflogic_server_admin";
        connect_timeout = "10";
        pool_size = 8;
        max_queue_size = 2048;
    };

    grpc: {
        // Container-internal listen address; do not change. The compose
        // stack maps this to 127.0.0.1:9999 on the host.
        listen_address = "0.0.0.0:9090";

        limits: {
            max_receive_message_bytes = 8388608;
            max_send_message_bytes = 8388608;
        };

        timeouts: {
            request_deadline_ms = 30000;
        };

        // TLS terminates at the host HAProxy; the loopback hop stays
        // cleartext h2.
        tls: {
            mode = "insecure";
        };
    };
};
Generate the two secret.* values and the init shared secret:
openssl rand -hex 15 # for secret.salt (30 hex chars)
openssl rand -hex 60 # for secret.secret (120 hex chars)
openssl rand -hex 16 # for reef_init_auth_shared_secret
The db.password value must equal REEFLOGIC_DB_PASSWORD in .env byte-for-byte. A mismatch causes the admin server to restart-loop with FATAL: password authentication failed for user "reeflogic".
Keep the file mode at 0644 — the in-container service user runs under a UID assigned at image-build time that does not match your host UID, so a restrictive mode would make the file unreadable inside the container. Protect the file via the containing directory's mode instead:
chmod 0644 ~/reeflogic/reeflogic_admin.conf
chmod 0700 ~/reeflogic # protect the directory holding both secrets files
Configuration reference¶
The table below lists every key in the sample above, with its purpose and the default value shipped in this guide. Required keys must be set before the admin server will accept connections; the rest are sensible defaults that you can override only if you have a reason.
| Key | Description | Default in this guide | Required |
|---|---|---|---|
| `enable_reef_init_service` | Whether to expose the one-time Reef-init bootstrap RPC. Set to `false` after the first successful init. | `true` | No |
| `reef_init_auth_shared_secret` | Shared secret authenticating the one-time Reef-init bootstrap call. Distribute out-of-band. | CHANGE-ME | Yes |
| `secret.salt` | Seed for the at-rest field-encryption key. Must be 30+ random ASCII characters. | CHANGE-ME | Yes |
| `secret.secret` | Seed for the server's global secret key. Must be 120+ random ASCII characters. | CHANGE-ME | Yes |
| `db.host` | PostgreSQL hostname. For the Compose stack this is the service name `db`. | `db` | No |
| `db.port` | PostgreSQL port. | `5432` | No |
| `db.dbname` | Application database name. Must match `REEFLOGIC_DB_NAME` in `.env`. | `reeflogic` | No |
| `db.user` | Application role name. Must match `REEFLOGIC_DB_USER` in `.env`. | `reeflogic` | No |
| `db.password` | Application role password. Must equal `REEFLOGIC_DB_PASSWORD` in `.env` byte-for-byte. | CHANGE-ME | Yes |
| `db.options` | libpq options string. Sets the schema `search_path` so SQL references resolve without an explicit schema. | `-c search_path=reeflogic` | No |
| `db.sslmode` | libpq SSL mode (`disable` / `prefer` / `require` / `verify-ca` / `verify-full`). | `prefer` | No |
| `db.application_name` | `application_name` reported to PostgreSQL and shown in `pg_stat_activity`. | `reeflogic_server_admin` | No |
| `db.connect_timeout` | libpq connect timeout in seconds (passed as a string). | `"10"` | No |
| `db.pool_size` | Number of concurrent libpq connections held open by the server's PgUtility pool. | `8` | No |
| `db.max_queue_size` | Pool queue depth before incoming requests are rejected. Tune up if requests pile up behind slow queries. | `2048` | No |
| `grpc.listen_address` | Container-internal bind address for the gRPC server. Do not change in the Compose stack. | `0.0.0.0:9090` | No |
| `grpc.tls.mode` | TLS mode: `insecure` (cleartext to HAProxy), `tls` (server cert), or `mtls` (server + client cert). | `insecure` | No |
| `grpc.limits.max_receive_message_bytes` | Maximum inbound gRPC message size, in bytes. | `8388608` (8 MiB) | No |
| `grpc.limits.max_send_message_bytes` | Maximum outbound gRPC message size, in bytes. | `8388608` (8 MiB) | No |
| `grpc.timeouts.request_deadline_ms` | Per-RPC server-side deadline in milliseconds. After this the server cancels the in-flight request. | `30000` | No |
Keys not listed above (additional libpq passthrough fields such as db.keepalives_idle, gRPC keepalive.* and thread_pool.* sub-blocks, advanced grpc.tls fields) accept their compiled-in defaults and can be left out of the file entirely unless you have a specific reason to override them.
Step 5 — Postgres init script: pg/init.d/01_create_dbs.sh¶
The Postgres image runs every executable file under /docker-entrypoint-initdb.d/ once, on the very first cluster boot. The compose stack bind-mounts ./pg/init.d to that path, so this script creates the two roles and two databases the application expects.
Paste into ~/reeflogic/pg/init.d/01_create_dbs.sh:
#!/usr/bin/env bash
# Postgres initialisation for the ReefLogic appliance.
#
# Creates two roles and two databases in the same cluster:
# - reeflogic (owner: reeflogic, with PostGIS)
# - keycloak (owner: keycloak, no PostGIS)
# Roles have no cross-grants; Keycloak cannot read tenant tables and
# vice versa. PostGIS is created with superuser privileges, then its
# objects are owned and used by the reeflogic role within the
# reeflogic database.
#
# Runs once on first cluster init (postgres entrypoint convention).
# Idempotent guards are present so manually re-running against an
# existing cluster is safe.
set -euo pipefail
: "${REEFLOGIC_DB_USER:?REEFLOGIC_DB_USER required in environment}"
: "${REEFLOGIC_DB_PASSWORD:?REEFLOGIC_DB_PASSWORD required in environment}"
: "${REEFLOGIC_DB_NAME:?REEFLOGIC_DB_NAME required in environment}"
: "${KEYCLOAK_DB_USER:?KEYCLOAK_DB_USER required in environment}"
: "${KEYCLOAK_DB_PASSWORD:?KEYCLOAK_DB_PASSWORD required in environment}"
: "${KEYCLOAK_DB_NAME:?KEYCLOAK_DB_NAME required in environment}"
psql --variable=ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname postgres <<SQL
DO \$\$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = '${REEFLOGIC_DB_USER}') THEN
        EXECUTE format('CREATE ROLE %I LOGIN PASSWORD %L', '${REEFLOGIC_DB_USER}', '${REEFLOGIC_DB_PASSWORD}');
    END IF;
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = '${KEYCLOAK_DB_USER}') THEN
        EXECUTE format('CREATE ROLE %I LOGIN PASSWORD %L', '${KEYCLOAK_DB_USER}', '${KEYCLOAK_DB_PASSWORD}');
    END IF;
END
\$\$;
SQL

if ! psql --username "${POSTGRES_USER}" --dbname postgres --tuples-only --no-align \
        --command "SELECT 1 FROM pg_database WHERE datname = '${REEFLOGIC_DB_NAME}'" | grep -q 1; then
    psql --variable=ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname postgres \
        --command "CREATE DATABASE \"${REEFLOGIC_DB_NAME}\" OWNER \"${REEFLOGIC_DB_USER}\""
fi

if ! psql --username "${POSTGRES_USER}" --dbname postgres --tuples-only --no-align \
        --command "SELECT 1 FROM pg_database WHERE datname = '${KEYCLOAK_DB_NAME}'" | grep -q 1; then
    psql --variable=ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname postgres \
        --command "CREATE DATABASE \"${KEYCLOAK_DB_NAME}\" OWNER \"${KEYCLOAK_DB_USER}\""
fi
psql --variable=ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${REEFLOGIC_DB_NAME}" <<SQL
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS postgis_topology;
GRANT USAGE ON SCHEMA public TO "${REEFLOGIC_DB_USER}";
GRANT ALL ON SCHEMA public TO "${REEFLOGIC_DB_USER}";
SQL
Make the script executable — the Postgres entrypoint runs it as a script, not as sourced shell, only when the file has the execute bit:
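One chmod on the host side suffices; the bind mount carries the mode into the container:

```shell
chmod +x ~/reeflogic/pg/init.d/01_create_dbs.sh
```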
Step 6 — Configure the host HAProxy¶
Open your host HAProxy config (typically /etc/haproxy/haproxy.cfg). Two changes are needed in the existing frontend that serves your :443 traffic, plus two new backend blocks.
Frontend — add ALPN and route the two hostnames¶
On the bind *:443 ssl crt ... line, add the ALPN advertisement so gRPC can negotiate HTTP/2:
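The exact line depends on your existing config; with the certificate path from the prerequisites it looks roughly like this (`https-in` is a placeholder; keep whatever name your existing frontend already has):

```
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/your-cert.pem alpn h2,http/1.1
```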
Inside the same frontend block, add two ACLs and the matching use_backend rules. Replace the two hostnames with the values you set in .env:
acl is_admin hdr(host) -i admin.example.com
acl is_keycloak hdr(host) -i auth.example.com
use_backend admin_backend if is_admin
use_backend keycloak_backend if is_keycloak
Backends — point at the two loopback ports¶
Add these two backend blocks at the bottom of the config:
backend admin_backend
    mode http
    server admin_local 127.0.0.1:9999 proto h2 check

backend keycloak_backend
    mode http
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    http-request set-header X-Forwarded-Host %[hdr(host)]
    server kc_local 127.0.0.1:9092 check
Three details that matter:
- `alpn h2,http/1.1` on the bind line — without it, gRPC clients cannot negotiate HTTP/2 and the TLS handshake fails.
- `proto h2` on the admin backend `server` line — without it, HAProxy speaks HTTP/1.1 to the admin server, which doesn't understand it.
- The certificate at `/etc/haproxy/certs/your-cert.pem` must list both `${ADMIN_DOMAIN}` and `${KEYCLOAK_DOMAIN}` from `.env` as Subject Alternative Names. Add them to whatever pipeline produces the PEM (`certbot --expand`, manual issuance, corporate PKI) and redeploy the cert before bringing the stack up.
Validate and reload¶
sudo haproxy -c -f /etc/haproxy/haproxy.cfg # syntax check
sudo systemctl reload haproxy # hitless reload (SIGUSR2)
The reload drains existing connections gracefully and brings up workers on the new config — no downtime for unrelated services already running on the same HAProxy.
Step 7 — Bring the stack up¶
From the ~/reeflogic directory:
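Bring the stack up detached, then follow the aggregate logs:

```shell
cd ~/reeflogic
docker compose up -d
docker compose logs --follow
```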
Watch the logs until you see, in order:
- `db` prints `running /docker-entrypoint-initdb.d/01_create_dbs.sh`, then `CREATE ROLE`, `CREATE DATABASE`, `CREATE EXTENSION`, and ultimately `database system is ready to accept connections`.
- `admin-server` runs `Flyway Community Edition` then `Successfully applied N migrations` (or `Schema "public" is up to date` on subsequent starts), then the gRPC server starts listening on `0.0.0.0:9090`.
- `keycloak` logs `Imported realm 'ReefLogic'` and then `Keycloak <version> on JVM started in <seconds>`.
If any of those steps loops or fails, see FAQ at the bottom for the common causes.
Step 8 — Verify¶
All three containers healthy:
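```shell
docker compose ps
```

All three services should report `Up`; `db` and `keycloak` additionally show `(healthy)` because they define healthchecks.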
Loopback backends answer directly on the host (before involving HAProxy):
# Keycloak realm discovery is the easiest end-to-end probe
curl --silent http://127.0.0.1:9092/realms/ReefLogic/.well-known/openid-configuration | head -c 200
# Admin gRPC port reachable; speak cleartext HTTP/2 so the h2-only server
# answers (the status code matters less than the absence of "connection refused")
curl --silent --http2-prior-knowledge --output /dev/null --write-out 'admin: %{http_code}\n' http://127.0.0.1:9999/
Through the public hostname and the host HAProxy:
curl --silent "https://${KEYCLOAK_DOMAIN}/realms/ReefLogic/.well-known/openid-configuration" \
| python3 -c 'import json, sys; print(json.load(sys.stdin)["issuer"])'
# Expected output: https://auth.example.com/realms/ReefLogic
Finally, sign in to the realm with the first realm admin user you set in .env:
URL: https://${KEYCLOAK_DOMAIN}/realms/ReefLogic/account
Username: <REEFLOGIC_ADMIN_USERNAME from .env>
Password: <REEFLOGIC_ADMIN_PASSWORD from .env>
If the login page loads with the correct certificate, the realm is healthy and the host HAProxy is wired correctly.
Day-2 operations¶
Tail container logs:
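```shell
docker compose logs --follow db keycloak admin-server
```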
The admin server also writes a structured rotating log inside its container:
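The path matches the `admin_server_logs` volume mount from Step 2:

```shell
docker compose exec admin-server tail -f /var/log/reeflogic/reef-server-admin.log
```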
Tail HAProxy on the host:
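Assuming the distro-packaged HAProxy runs under systemd and logs to the journal:

```shell
sudo journalctl -u haproxy -f
```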
Restart the admin server (Flyway re-runs idempotently — already-applied migrations are skipped):
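```shell
docker compose restart admin-server
```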
Roll out a new ReefLogic version:
- Bump `REEFLOGIC_VERSION` in `.env` to the new build version.
- Run `docker compose pull`.
- Run `docker compose up -d`.
admin-server recreates with the new image; on startup it runs flyway migrate, picks up any new migrations baked into the image, then starts serving.
Ad-hoc database access:
docker compose exec db psql -U postgres -d reeflogic
docker compose exec db psql -U postgres -d keycloak
Manual database snapshot (the appliance does not bundle a scheduled backup runbook):
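A plain `pg_dump` per database is enough for a cold copy. The `-T` flag keeps Compose from allocating a TTY, which would corrupt the redirected output:

```shell
docker compose exec -T db pg_dump -U postgres -Fc reeflogic > reeflogic_$(date +%F).dump
docker compose exec -T db pg_dump -U postgres -Fc keycloak > keycloak_$(date +%F).dump
```

Restore with `pg_restore` against a freshly initialised cluster.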
Re-importing the Keycloak realm¶
Keycloak's --import-realm flag is one-shot: it imports on first boot and silently skips on subsequent boots once it sees the realm in the database. To re-import after a Keycloak image update or a config change, drop the realm first:
docker compose exec keycloak sh -c '
  /opt/keycloak/bin/kcadm.sh config credentials \
      --server http://localhost:8080 --realm master \
      --user "$KC_BOOTSTRAP_ADMIN_USERNAME" --password "$KC_BOOTSTRAP_ADMIN_PASSWORD"
  /opt/keycloak/bin/kcadm.sh delete realms/ReefLogic
'
docker compose restart keycloak
You will need to re-create any users, groups, and clients you added by hand after the initial bootstrap.
Stop or remove¶
Stop the stack but keep all data (database volume and admin server logs survive):
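Containers and the private network are removed; the named volumes `pg_data` and `admin_server_logs` survive:

```shell
docker compose down
```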
Stop and delete everything (database, Keycloak realm, admin server logs — irreversible):
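The `--volumes` flag additionally deletes the named volumes:

```shell
docker compose down --volumes
```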
Next steps¶
- Open the ReefLogic Administration Manual to operate the admin desktop client: connect, bootstrap the freshly deployed server, and manage tenants, roles, and grants.
- Continue with the ReefLogic User Manual for day-to-day data-side operational workflows.
- Consult the gRPC and Protobuf API Reference when integrating against the admin gRPC endpoint.
- Review the C++ API Reference when extending the admin server or the shared commons library.
FAQ¶
What is the minimum host the appliance needs?
A 64-bit Linux host with Docker Engine 24+, Docker Compose v2, roughly 4 GB RAM, 2 vCPU, and 20 GB disk. The host must also run HAProxy that already terminates TLS for your domain — the compose stack does not ship its own HAProxy and does not handle ACME or certificate renewal.
Why does the admin server publish on port 9999 instead of 9090?
The container listens on 9090 internally (as configured in reeflogic_admin.conf), but Compose maps it to 127.0.0.1:9999 on the host because port 9090 is commonly already in use. Keycloak follows the same pattern: container 8080 mapped to host 9092 to avoid colliding with other web services. Pick free host ports for your deployment and keep compose.yml and the HAProxy server lines in sync.
My HAProxy backend lines look correct but gRPC clients see TLS errors. What's wrong?
Three things to verify on the host HAProxy: the :443 bind line must carry alpn h2,http/1.1; the admin_backend server line must carry proto h2; and the hostname the client sends must match ADMIN_DOMAIN exactly (HAProxy matching is case-insensitive but requires an exact hostname). If any of these is missing or wrong, the TLS handshake either fails outright or HAProxy silently degrades to HTTP/1.1 and the admin server rejects the connection.
The Postgres container exits with "PostgreSQL data in /var/lib/postgresql/data (unused mount/volume)". What happened?
Postgres 18 and newer refuse to start with a volume mounted directly at the legacy data path. Mount the named volume (or your host bind mount) one level up at /var/lib/postgresql so the image manages the version subdirectory itself, as shown in the compose.yml in Step 2. If you already have a half-initialised cluster from a previous attempt, stop the stack, run docker compose down --volumes, and bring the stack back up to bootstrap a fresh PG18 layout.
Admin-server keeps restarting with "password authentication failed for user reeflogic". Why?
Two common causes. First, db.password in reeflogic_admin.conf doesn't match REEFLOGIC_DB_PASSWORD in .env byte-for-byte. Second, the Postgres init script did not run because the pg/init.d directory was missing on the host: Docker silently creates an empty mount source, and the Postgres entrypoint logs ignoring /docker-entrypoint-initdb.d/* in its logs. Confirm the application roles exist with docker compose exec db psql -U postgres -c "\du".
How do I roll out a new ReefLogic version safely?
Bump REEFLOGIC_VERSION in .env to the new build version, run docker compose pull, then docker compose up -d. The admin server re-runs Flyway migrate on every start, and migration failures abort the container — surfaced as a restart loop — so the schema can never silently diverge from the binary. Take a database snapshot beforehand if the release notes flag a destructive migration.