The following services run as Docker Compose stacks on the Synology DS920+ NAS (192.168.88.19). All compose files are located at /volume1/docker/<app>/ and managed by Portainer Git stacks — each stack auto-pulls from the nas-stacks GitHub repo every 5 minutes. Secrets are injected via Portainer's stack environment variables UI (not stored in Git).
| App | Port | Compose Directory | External Access |
|---|---|---|---|
| Immich | 2283 | `/volume1/docker/immich/` | LAN only |
| Vaultwarden | 8843 | `/volume1/docker/vaultwarden/` | https://vault.vyanh.uk (via MikroTik cloudflared tunnel) |
| Uptime Kuma | 3001 | `/volume1/docker/uptime-kuma/` | https://status.vyanh.uk (via K8s cloudflared tunnel) |
| Portainer | 9443 | `/volume1/docker/portainer/` | LAN only (manually deployed, not a Git stack) |
| Watchtower | 8080 | `/volume1/docker/portainer/` | Monitor-only mode |
| Homepage | 3000 | `homepage/` in nas-stacks repo | LAN only |
| Paperless-ngx | 8010 | `/volume1/docker/paperless/` | LAN only |
| Node Exporter | 9100 | `/volume1/docker/node-exporter/` | Scraped by K8s vmagent |
| SNMP Exporter | 9116 | `/volume1/docker/snmp-exporter/` | Scraped by K8s vmagent |
| UPS API | host | `/volume1/docker/ups-api/` | LAN only, host network |
| Peanut | 8080 | `/volume1/docker/peanut/` | LAN only, NUT web dashboard |
| Syncthing | 8384 | `syncthing/` in nas-stacks repo | LAN only — receives Proxmox backups from K8s |
| Syncthing Pruner | — | `syncthing/` (same stack) | Sidecar — deletes files >14 days from dump dir |
| ntfy | 2586 | `/volume1/docker/ntfy/` | https://ntfy.vyanh.uk and https://ntfy.homelab.vyanh.uk |
| IT Tools | 8765 | `/volume1/docker/it-tools/` | LAN only |
| vault-watcher | — | `/volume1/docker/vault-watcher/` | LAN only — Vault seal monitoring |
| MinIO | 9000/9001 | `/volume1/docker/minio/` | LAN only — S3-compatible object storage |
| WikiJS | 3080 | `/volume1/docker/wikijs/` | LAN only — this wiki |
| Memos | 5230 | `/volume1/docker/memos/` | LAN only — notes |
| GitLab CE | 3400 (web), 2222 (SSH) | `/volume1/docker/gitlab/` | LAN only — Git server (primary, GitHub mirror) |
Self-hosted photo and video management. Uses Intel Quick Sync (J4125) for hardware-accelerated video transcoding. Database uses pgvecto-rs for vector similarity search (face detection, CLIP).
- Exposed via `nas-ingress` — see Service HA
- PostgreSQL at `192.168.88.19:5434` (non-standard port) for K8s standby access
- Redis at `192.168.88.19:6379` for K8s standby access
- Photo library (`/volume1/photos_immich`) exported to K8s subnet `192.168.88.0/24`

**Synology gotcha:** containers on `internal: true` Docker networks don't get host port publishing on Synology DSM. The `redis` and `database` services are therefore also connected to `app-net` (non-internal) so their host port bindings activate correctly.
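In compose terms the workaround looks roughly like this (illustrative fragment; the network name `app-net` and the service names `redis` and `database` come from this page, everything else is assumed):

```yaml
networks:
  immich-internal:        # name assumed
    internal: true        # no host port publishing on Synology DSM
  app-net: {}             # non-internal, so port bindings activate

services:
  database:
    image: tensorchord/pgvecto-rs:pg16-v0.2.0   # image/tag assumed for illustration
    networks: [immich-internal, app-net]
    ports:
      - "5434:5432"       # published thanks to app-net membership
  redis:
    image: redis:7
    networks: [immich-internal, app-net]
    ports:
      - "6379:6379"
```

Attaching each service to both networks keeps intra-stack traffic on the internal network while still letting DSM publish the host ports.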
Lightweight Bitwarden-compatible password manager. Domain: https://vault.vyanh.uk.
- LAN access on port 8843 (`192.168.88.19:8843`)
- SMTP via Gmail ([email protected], port 587 STARTTLS)
- Public domain vault.vyanh.uk — the tunnel runs on the MikroTik router, not in this compose stack
- Data at `/volume1/docker/vaultwarden/data`
- `nas-ingress` — see Service HA

**Security hardening (2026-03-14):**

- `no-new-privileges: true`, `mem_limit: 512m` / `mem_reservation: 128m`
- `IP_HEADER=CF-Connecting-IP` for correct client IP logging behind the Cloudflare tunnel
- Admin token is an Argon2id hash (`m=65536, t=3, p=4`) stored in `/volume1/docker/vaultwarden/admin-token.env` — NOT in Portainer env vars (the `$argon2id$` dollar signs would be interpolated by compose). Also mirrored at `/volume1/docker/portainer/data/vaultwarden-admin-token.env` so Portainer API redeploys can find it.
- Healthcheck: `curl http://localhost/alive` every 30s

**Backup (2026-03-14):**
Two-layer backup:
1. **Litestream** streams `db.sqlite3` to MinIO `db-backups/vaultwarden/litestream/` in real time. Config at `/volume1/docker/vaultwarden/litestream.yml`.
2. Hourly copy of `db.sqlite3.bak` to MinIO `db-backups/vaultwarden/YYYY-MM-DD.sqlite3`, kept under a 30-day lifecycle rule.

Portainer env vars (stack 9):
| Variable | Value |
|---|---|
| `SMTP_PASSWORD` | Gmail app password |
| `MINIO_ACCESS_KEY` | `db-backup` |
| `MINIO_SECRET_KEY` | MinIO secret for the `db-backup` user |
**Note:** `ADMIN_TOKEN` is NOT a Portainer env var. It is sourced from `admin-token.env` via compose `env_file:` to avoid Argon2id `$` sign interpolation.
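In compose terms the token wiring looks roughly like this (sketch; only the paths and variable names are from this page):

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    env_file:
      # Values from env_file are passed to the container verbatim, so the
      # $argon2id$ dollar signs in ADMIN_TOKEN survive uninterpolated.
      - /volume1/docker/vaultwarden/admin-token.env
    environment:
      # By contrast, ${...} here IS interpolated from Portainer stack env vars.
      - SMTP_PASSWORD=${SMTP_PASSWORD}
```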
Vault reference: `kv/vaultwarden/admin` → `password` (plain text, for reference), `token_hash` (Argon2id hash, matches `admin-token.env`)
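A minimal `litestream.yml` for the real-time layer might look like this (sketch; the in-container DB path and the credential wiring are assumptions):

```yaml
dbs:
  - path: /data/db.sqlite3            # assumed mount point of db.sqlite3
    replicas:
      - type: s3
        endpoint: http://192.168.88.19:9000   # MinIO API port
        bucket: db-backups
        path: vaultwarden/litestream
        # Credentials would be supplied via LITESTREAM_ACCESS_KEY_ID /
        # LITESTREAM_SECRET_ACCESS_KEY environment variables.
```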
Infrastructure monitoring with 22 monitors organized into 3 groups:
| Group | Monitors |
|---|---|
| Infrastructure | K8s node pings, router, core infrastructure |
| NAS Apps | All Docker apps on the NAS |
| Kubernetes Services | All K8s services via Traefik IP |
K8s monitoring trick: Since Uptime Kuma runs in Docker bridge network (can't resolve K8s internal DNS), monitors point to Traefik IP 192.168.88.12 directly with Host header. Settings: maxredirects=0, accept 200-399.
Alerts: ntfy push notifications linked to all monitors. Server: https://ntfy.homelab.vyanh.uk, topic: homelab-alerts, priority: high.
Database: SQLite at `/volume1/docker/uptime-kuma/data/kuma.db` (owned by root). To edit:

```shell
# Stop container first, then (-it for an interactive sqlite3 shell):
docker run --rm -it --user root \
  -v /volume1/docker/uptime-kuma/data:/data \
  keinos/sqlite3 sqlite3 /data/kuma.db
```
Document management system with OCR capabilities. Uses Gotenberg for PDF processing and Tika for content extraction. Default admin credentials: admin/changeme.
Customizable dashboard. Requires HOMEPAGE_ALLOWED_HOSTS environment variable for the host header allowlist.
Config files (services.yaml, settings.yaml, widgets.yaml, etc.) are tracked in the nas-stacks repo under homepage/config/ and mounted directly into the container from the Portainer Git clone path. To update the dashboard, edit the config in the repo, push, then Pull and redeploy in Portainer.
All 9 Homepage skeleton files must be present in the repo — the container cannot write to the config dir at runtime. See the Homepage Dashboard wiki page for full details.
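An entry in `services.yaml` follows Homepage's group → service shape, for example (hypothetical entry, not copied from the repo):

```yaml
- NAS Apps:
    - Vaultwarden:
        href: https://vault.vyanh.uk
        description: Password manager
        icon: vaultwarden.png
```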
Container update monitor running in monitor-only mode — it checks for updates and sends notifications but does NOT automatically update containers.
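Monitor-only mode is a single Watchtower flag; the service definition is roughly (sketch; the image tag is an assumption):

```yaml
services:
  watchtower:
    image: containrrr/watchtower:latest
    environment:
      - WATCHTOWER_MONITOR_ONLY=true   # check and notify, never pull/replace
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```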
Custom Python application that monitors the UPS (Uninterruptible Power Supply) via NUT (Network UPS Tools) and sends Telegram alerts on power events (on battery, low battery, etc.).
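The alert decision inside such a watcher can be sketched as follows (hypothetical, not the actual ups-api source; only the NUT address and the Telegram env vars come from this page):

```shell
#!/bin/sh
# Hypothetical sketch: map a NUT ups.status string to an alert message.
# Flags: OL = online, OB = on battery, LB = low battery.
classify_status() {
  case " $1 " in
    *" LB "*) echo "CRITICAL: UPS battery low" ;;
    *" OB "*) echo "WARNING: UPS on battery" ;;
    *" OL "*) echo "OK: UPS online" ;;
    *)        echo "UNKNOWN: $1" ;;
  esac
}

# The real service would poll the NUT daemon, e.g. (UPS name "ups" assumed):
#   STATUS=$(upsc ups@127.0.0.1:3493 ups.status)
classify_status "OB DISCHRG"   # → WARNING: UPS on battery
```

The resulting message would then be posted to the Telegram chat identified by `TELEGRAM_CHAT_ID`.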
- Runs with `network_mode: host` to reach the NUT daemon at `127.0.0.1:3493`
- `TELEGRAM_TOKEN`, `TELEGRAM_CHAT_ID` injected via Portainer stack env vars

Web UI for monitoring UPS status via NUT. Provides a visual dashboard for battery level, load, runtime, and UPS events.
| Setting | Value |
|---|---|
| Image | brandawg93/peanut:latest |
| Port | 8080 (host network) |
| NUT Host | 127.0.0.1:3493 (local NUT daemon) |
Peanut runs alongside UPS API on host network mode so both can connect to the NUT daemon on 127.0.0.1. Access via http://192.168.88.19:8080 on the LAN.
Receives Proxmox VM backup files synced from the K8s Syncthing pod (which reads from TrueNAS NFS). Acts as the long-term archive for VM backups.
| Setting | Value |
|---|---|
| Image | syncthing/syncthing:latest |
| GUI port | 8384 → http://192.168.88.19:8384 |
| Sync port | 22000 TCP+UDP |
| Config | /volume1/docker/syncthing/config (bind mount, owned by 1026:100) |
| Data | /volume1/docker/proxmox-backup/dump (bind mount) |
| Folder mode | Receive Only + Ignore Deletes |
| PUID/PGID | 1026 / 100 (andy user) |
Ignore Deletes is critical: when Proxmox prunes the TrueNAS staging copy (keep-last=1), the K8s Syncthing stops announcing those files. Without Ignore Deletes, Synology would delete them too — defeating the archive purpose.
Why a bind mount for config (not a named volume)? Synology ACLs (the `+` flag on Btrfs directories) make named Docker volumes appear read-only inside containers. The config dir must be pre-created with correct ownership before first start:
```shell
mkdir -p /volume1/docker/syncthing/config
chown 1026:100 /volume1/docker/syncthing/config
```
Device IDs (for re-pairing if config is lost):
- `SKNA3BI-CNUWXRY-OX22RNI-RVAXFBH-VMO27VE-UTBT5X7-RK5KZA4-AIFIJQZ`
- `YH2QVA4-XQ2SXXC-IEL2BXQ-GXF3OAM-HQJ42NK-KLPPXQN-DYRVUJB-SRCJVQL`

Device IDs only change if the config directory is deleted. Both sides persist config on disk (K8s on a Longhorn PVC, Synology on a bind mount) — IDs survive restarts and reboots.
Alpine container in the same syncthing stack that runs crond to delete old backup files from the Synology dump directory.
| Setting | Value |
|---|---|
| Image | alpine:latest |
| Schedule | 30 6 * * * (06:30 daily) |
| Deletes | *.vma.zst, *.vma.zst.notes, *.log, *.tar older than 14 days |
| Directory | /volume1/docker/proxmox-backup/dump |
Why a sidecar instead of DSM Task Scheduler? DSM 2FA blocks the API, and there is no crontab binary on the NAS for non-root users. A Docker sidecar requires no root access and is self-contained in the Git stack.
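The deletion pass itself boils down to a single `find` over the dump directory. A sketch (assumed; the real command lives in the stack's crond entry):

```shell
#!/bin/sh
# Sketch of the pruner job: delete backup artifacts older than 14 days.
prune_dumps() {
  find "$1" -maxdepth 1 -type f \
    \( -name '*.vma.zst' -o -name '*.vma.zst.notes' \
       -o -name '*.log' -o -name '*.tar' \) \
    -mtime +14 -delete
}

# On the NAS, crond would run this daily at 06:30:
#   prune_dumps /volume1/docker/proxmox-backup/dump
```

`-maxdepth 1` keeps the pass from recursing into anything Syncthing might create below the dump dir.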
Self-hosted push notification server. All homelab alerting channels route through ntfy.
| Setting | Value |
|---|---|
| Port | 2586 |
| Public URL | https://ntfy.vyanh.uk |
| LAN URL | https://ntfy.homelab.vyanh.uk |
| Auth | deny-all — Bearer token authentication required |
| FCM relay | Upstream https://ntfy.sh for mobile push (Firebase Cloud Messaging) |
| Admin user | andy |
Topics:
| Topic | Used By | Priority |
|---|---|---|
| `homelab-alerts` | vmalert/alertmanager (K8s), Uptime Kuma | high/critical |
| `homelab-ops` | MikroTik backup scripts, general operations | low/default |
| `homelab-security` | CrowdSec security events | high |
| `homelab-ups` | UPS power events (UPS API) | high |
Subscribe: Log in at https://ntfy.homelab.vyanh.uk with andy credentials, then subscribe to topics. Mobile: install ntfy app → subscribe to ntfy.vyanh.uk/<topic> with Bearer token.
Secrets: NTFY_TOKEN injected via Portainer env vars (not in Git).
Collection of browser-based developer utilities (encoding, crypto, network tools, etc.).
| Setting | Value |
|---|---|
| Image | corentinth/it-tools:latest |
| Port | 8765 |
| URL | http://192.168.88.19:8765 |
| Data | Stateless — no persistent volume needed |
Alpine container that monitors Vault seal state and K8s API reachability. Sends ntfy alert if Vault becomes sealed or unreachable.
| Setting | Value |
|---|---|
| Image | alpine:3.21 |
| Script | /volume1/docker/vault-watcher/scripts/watch.sh (bind mounted) |
| State | Named Docker volume vault-watcher-state |
| Check interval | 30 seconds |
| Vault transit | http://192.168.88.19:8201 |
| Vault K8s | https://vault.homelab.vyanh.uk |
| Alerts | ntfy via NTFY_URL/NTFY_TOKEN env vars |
| Memory limit | 64m |
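The seal check reduces to parsing the JSON from Vault's `GET /v1/sys/seal-status` endpoint. A sketch of the logic (assumed; the real script is `watch.sh`):

```shell
#!/bin/sh
# Sketch, not the real watch.sh. Vault's /v1/sys/seal-status returns JSON
# containing "sealed":true|false; grep is enough to branch on it.
is_sealed() {
  printf '%s' "$1" | grep -q '"sealed"[[:space:]]*:[[:space:]]*true'
}

# The real check would be, roughly:
#   body=$(curl -sf http://192.168.88.19:8201/v1/sys/seal-status) || alert "unreachable"
#   is_sealed "$body" && alert "Vault is sealed"
is_sealed '{"sealed":false}' && echo sealed || echo unsealed   # → unsealed
```

A `curl -sf` failure covers the "unreachable" case (non-2xx or connection error), so both alert conditions fall out of the same request.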
S3-compatible object storage. Serves as the backend for all K8s backup tools (Velero, Longhorn, pg_dump, Tempo traces).
| Setting | Value |
|---|---|
| API Port | 9000 |
| Console Port | 9001 |
| Network mode | host (required for passive FTP on port 2121) |
| Data | /volume1/docker/minio/data |
| FTP | Port 2121, passive range 30000-30010 (MikroTik router backup uploads) |
Buckets: velero-backups, longhorn-backups, db-backups, tempo-traces, harbor-registry, router-backups.
See Backup Strategy for full bucket and user details.