DNS in the homelab is handled by four components providing ad-blocking, DoT/DoH, recursive resolution, in-cluster DNS, and automatic record management.
Technitium is deployed as raw Kubernetes manifests (no Helm chart) in the technitium namespace at sync wave 6.
Primary (technitium-primary):
| Setting | Value |
|---|---|
| Image | technitium/dns-server:14 |
| Namespace | technitium |
| Sync Wave | 6 |
| Admin Password | Vault: kv/technitium/admin |
| Service | LoadBalancer at 192.168.88.11 (externalTrafficPolicy: Local) |
| Ingress | https://dns.homelab.vyanh.uk (DoH on port 8443, web UI on port 5380) |
| Storage | longhorn 2Gi PVC |
| Resources | Deployed via raw manifests: primary-deployment.yaml, primary-pvc.yaml, primary-service-dns.yaml, primary-service-web.yaml |
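The DNS Service is a LoadBalancer pinned to that VIP (presumably assigned by MetalLB, which this cluster uses for LB IPs). A minimal sketch of what primary-service-dns.yaml plausibly contains; the selector labels, port list, and MetalLB annotation are assumptions, not a copy of the real manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: technitium-primary-dns
  namespace: technitium
  annotations:
    # Assumes MetalLB; pins the LoadBalancer to the VIP from the table above
    metallb.universe.tf/loadBalancerIPs: "192.168.88.11"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve client source IPs for query logs
  selector:
    app: technitium-primary
  ports:
    - name: dns-udp
      port: 53
      targetPort: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      targetPort: 53
      protocol: TCP
    - name: dot
      port: 853
      targetPort: 853
      protocol: TCP
```

externalTrafficPolicy: Local keeps the real client IP visible to Technitium, which is what makes the per-client stats on the dashboard meaningful.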
Secondary (technitium-secondary):
| Setting | Value |
|---|---|
| Image | technitium/dns-server:14 |
| Service | LoadBalancer at 192.168.88.13 |
| Storage | longhorn 2Gi PVC |
| Anti-affinity | Runs on different node from primary |
| Ingress | https://dns2.homelab.vyanh.uk |
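The "different node from primary" guarantee is a standard podAntiAffinity rule on the secondary's pod template. A sketch, assuming the primary's pods are labelled app: technitium-primary (label names are illustrative):

```yaml
# In the technitium-secondary Deployment's pod spec
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: technitium-primary
        topologyKey: kubernetes.io/hostname   # never co-schedule with the primary
```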
MikroTik Instance (third instance):
| Setting | Value |
|---|---|
| Location | MikroTik container (app-technitium) on virtual IP 192.168.88.53 |
| Web UI | http://dns-mikrotik.homelab.vyanh.uk |
| Zones | homelab.vyanh.uk + vyanh.uk as Secondary AXFR from 192.168.88.11 |
| Forwarder | Cloudflare DoH HttpsJson |
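To check the transfer path by hand, you can request the same AXFR the MikroTik instance performs (assumes your source IP is allowed to transfer the zone):

```bash
# Full zone transfer of the authoritative zone from the primary
dig @192.168.88.11 homelab.vyanh.uk AXFR

# Spot-check that the MikroTik instance serves the transferred data
dig @192.168.88.53 dns.homelab.vyanh.uk A +short
```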
A configure-job.yaml ArgoCD PostSync hook (hook-delete-policy: BeforeHookCreation) configures both K8s instances via the Technitium API:

- dnsServerDomain set per-instance (dns.homelab.vyanh.uk / dns2.homelab.vyanh.uk)
- Forwarding: forwarderProtocol=Https, concurrentForwarding=true
- DNSSEC: dnssecValidation=true
- Recursion: AllowOnlyForPrivateNetworks
- TLS certificate from the technitium-tls secret
- DoH: the enableDnsOverHttp + X-Forwarded-Proto approach does NOT work
- Privacy: qnameMinimization=true, randomizeName=true, eDnsClientSubnet=false
- Cache: cacheMaximumRecordTtl=86400, cacheMinimumRecordTtl=60, serveStaleTtl=86400, prefetch enabled

Blocklists:

| Source | URL |
|---|---|
| StevenBlack/hosts | https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts |
| AdGuard DNS filter | https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt |
| URLhaus | https://urlhaus.abuse.ch/downloads/hostfile/ |
| HaGeZi Light | https://cdn.jsdelivr.net/gh/hagezi/dns-blocklists@latest/adblock/light.txt |
Important: Each blocklist URL must be a separate --data-urlencode param in the configure-job curl call. A single combined string is silently ignored by the Technitium API.
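A sketch of that part of the job; the in-cluster service URL, the /api/settings/set endpoint, and the blockListUrls parameter are assumptions based on the Technitium HTTP API, not a copy of the real configure-job:

```bash
# One --data-urlencode per blocklist URL; a single comma-joined string is ignored.
curl -fsS "http://technitium-primary.technitium.svc.cluster.local:5380/api/settings/set" \
  --data-urlencode "token=${TECHNITIUM_TOKEN}" \
  --data-urlencode "blockListUrls=https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts" \
  --data-urlencode "blockListUrls=https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt" \
  --data-urlencode "blockListUrls=https://urlhaus.abuse.ch/downloads/hostfile/" \
  --data-urlencode "blockListUrls=https://cdn.jsdelivr.net/gh/hagezi/dns-blocklists@latest/adblock/light.txt"
```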
Apps are installed from the Technitium app store (non-fatal if unavailable).
Technitium primary hosts the homelab.vyanh.uk authoritative zone. ExternalDNS manages records in this zone via rfc2136+TSIG. The secondary and MikroTik instances pull this zone via AXFR.
Public *.vyanh.uk hostnames exposed via Cloudflare Tunnel are also overridden locally so LAN clients reach Traefik directly — bypassing Cloudflare entirely.
The configure-job.yaml PostSync hook runs an ensure_split_dns function on every ArgoCD sync. For each public hostname it ensures a local override zone with an A record → 192.168.88.12 (Traefik LB), TTL 300. This runs against all three Technitium instances (primary, secondary, MikroTik); a sketch of the per-hostname logic follows the zone table below.
| Zone | A Record | Backend via Traefik |
|---|---|---|
| vault.vyanh.uk | 192.168.88.12 | Vaultwarden :8843 on NAS |
| wiki.vyanh.uk | 192.168.88.12 | WikiJS :3080 on NAS |
| status.vyanh.uk | 192.168.88.12 | Uptime Kuma :3001 on NAS |
| tracker.vyanh.uk | 192.168.88.12 | LifeOps in K8s |
| ntfy.vyanh.uk | 192.168.88.12 | ntfy :2586 on NAS |
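A minimal sketch of what ensure_split_dns plausibly does per hostname, assuming the Technitium zones/records API endpoints and a single admin token in TECHNITIUM_TOKEN; the real script in configure-job.yaml may differ in details:

```bash
ensure_split_dns() {
  local api="$1" host="$2" target="192.168.88.12" ttl=300
  # Create a local override zone for the public hostname (ignore "already exists")
  curl -fsS "${api}/api/zones/create" \
    --data-urlencode "token=${TECHNITIUM_TOKEN}" \
    --data-urlencode "zone=${host}" \
    --data-urlencode "type=Primary" || true
  # Upsert the A record so LAN clients hit Traefik directly, bypassing Cloudflare
  curl -fsS "${api}/api/zones/records/add" \
    --data-urlencode "token=${TECHNITIUM_TOKEN}" \
    --data-urlencode "domain=${host}" \
    --data-urlencode "zone=${host}" \
    --data-urlencode "type=A" \
    --data-urlencode "ttl=${ttl}" \
    --data-urlencode "overwrite=true" \
    --data-urlencode "ipAddress=${target}"
}

# Applied to all three instances for every public hostname
for api in http://192.168.88.11:5380 http://192.168.88.13:5380 http://192.168.88.53:5380; do
  for host in vault.vyanh.uk wiki.vyanh.uk status.vyanh.uk tracker.vyanh.uk ntfy.vyanh.uk; do
    ensure_split_dns "$api" "$host"
  done
done
```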
Each Traefik Ingress serves both the *.homelab.vyanh.uk and *.vyanh.uk hostnames with separate TLS certificates issued via the letsencrypt-dns01 ClusterIssuer.
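A sketch of the dual-hostname pattern (app name, namespace, ports, and secret names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-dns01
spec:
  ingressClassName: traefik
  tls:
    - hosts: [myapp.homelab.vyanh.uk]
      secretName: myapp-homelab-tls      # cert for the internal hostname
    - hosts: [myapp.vyanh.uk]
      secretName: myapp-public-tls       # separate cert for the public hostname
  rules:
    - host: myapp.homelab.vyanh.uk
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
    - host: myapp.vyanh.uk
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```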
| Feature | Test | Result |
|---|---|---|
| DoT | openssl s_client -connect 192.168.88.11:853 | LE cert chain ✓ |
| DoH | kdig +https @dns.homelab.vyanh.uk cloudflare.com | HTTP/2-POST status 200 ✓ |
| Blocking | dig doubleclick.net @192.168.88.11 | NXDOMAIN + EDE "Blocked" ✓ |
| DNSSEC | dig +dnssec google.com @192.168.88.11 | flags: qr rd ra ad ✓ |
Technitium v14 has no Prometheus /metrics endpoint — the built-in web dashboard at dns.homelab.vyanh.uk covers stats, query logs, blocklist hits, and top clients. No Grafana dashboard or vmagent scrape job is needed.
| Setting | Value |
|---|---|
| Sync Wave | -10 (deploys first, before everything) |
| Namespace | kube-system |
| Type | ConfigMap (custom Corefile) managed by ArgoCD coredns-custom app |
CoreDNS uses split-horizon DNS with a dedicated zone block for homelab.vyanh.uk. This means:
- *.homelab.vyanh.uk A queries return the Traefik ClusterIP directly (see hairpin NAT note below)
- cluster.local is handled natively by the kubernetes plugin as always

```
Pod DNS query
 │
 ▼
CoreDNS (10.96.0.10)
 │
 ├─ *.cluster.local ──→ kubernetes plugin (in-cluster service discovery)
 │
 ├─ *.homelab.vyanh.uk ──→ template plugin → 10.99.138.53 (Traefik ClusterIP)
 │
 └─ everything else ──→ Technitium primary → Technitium secondary
                          → 1.1.1.1 → 8.8.8.8
```
An earlier version forwarded homelab.vyanh.uk queries to Technitium, which is authoritative and returns 192.168.88.12 (the MetalLB LoadBalancer IP). This broke all in-cluster SSO:
```
Pod → resolves authentik.homelab.vyanh.uk → 192.168.88.12
Pod → connect 192.168.88.12:443 → i/o timeout
```
Why the timeout? MetalLB L2 mode announces 192.168.88.12 on the physical LAN via ARP. kube-proxy sets up DNAT for external traffic arriving at the node, but does not DNAT pod-originated traffic destined for the LB IP — so packets from pods hit the node's physical interface and go nowhere.
Fix: The homelab.vyanh.uk:53 block uses a template plugin to return the Traefik ClusterIP (10.99.138.53) for all A queries. ClusterIPs are always reachable from any pod on any node. Since every *.homelab.vyanh.uk service is a Traefik ingress, the ClusterIP is always the correct in-cluster destination.
The ClusterIP is stable — it only changes if the Traefik Service is deleted and recreated (which ArgoCD would handle).
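A quick way to confirm pods get the ClusterIP answer (pod name and image are illustrative):

```bash
# Ask CoreDNS directly from a throwaway pod
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup authentik.homelab.vyanh.uk 10.96.0.10
# Expected answer: 10.99.138.53 (Traefik ClusterIP), not 192.168.88.12
```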
```
# All *.homelab.vyanh.uk services are fronted by Traefik.
# Return the Traefik ClusterIP (10.99.138.53) for all A queries from pods.
# Pods cannot reach the MetalLB LoadBalancer IP (192.168.88.12) due to hairpin NAT:
# the external LB IP is announced via ARP on the physical network but kube-proxy
# does not DNAT pod→LB traffic back into the cluster, causing i/o timeout.
homelab.vyanh.uk:53 {
    template IN A homelab.vyanh.uk {
        answer "{{ .Name }} 60 IN A 10.99.138.53"
    }
    template IN AAAA homelab.vyanh.uk {
        rcode NOERROR
    }
    cache 60
    errors
    log . {
        class error
    }
}

.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    # Suppress AAAA lookups — cluster has no IPv6 pod routing
    template IN AAAA {
        rcode NOERROR
    }
    # Suppress gRPC service-config TXT lookups (_grpc_config.*)
    template IN TXT {
        match ^_grpc_config\.
        rcode NXDOMAIN
    }
    # Route public queries through Technitium (blocklists, DNSSEC, cache)
    # then fall back to public DNS if Technitium is unreachable
    forward . 192.168.88.11 192.168.88.13 1.1.1.1 8.8.8.8 {
        max_concurrent 1000
        health_check 5s
    }
    cache 300 {
        disable success cluster.local
        disable denial cluster.local
    }
    loop
    reload
    loadbalance
}
```
Why not a hosts {} block? An earlier version hardcoded 5 *.homelab.vyanh.uk hostnames in a hosts {} block pointing to the Traefik ClusterIP. This had two problems:

- nextcloud.homelab.vyanh.uk, vault.homelab.vyanh.uk, wikijs.homelab.vyanh.uk, etc. got NXDOMAIN
- hosts {} does not support wildcards such as *.homelab.vyanh.uk

The template plugin fixes both: it matches all A queries for any subdomain of homelab.vyanh.uk and always returns the correct ClusterIP. New services added via ExternalDNS work automatically.
During initial cluster bootstrap, CoreDNS starts at wave -10, well before Technitium (wave 6) and MetalLB (wave 1). The template plugin has no external dependency — it answers entirely from config. DNS for homelab.vyanh.uk is available immediately on startup.
K8s nodes use systemd-resolved for system-level DNS (used by kubelet and containerd for image pulls). This is separate from CoreDNS (which pods use).
Problem discovered 2026-03-09: All 4 nodes had 8.8.8.8/1.1.1.1 hardcoded in their cloud-init netplan config. Containerd cannot resolve harbor.homelab.vyanh.uk via public DNS → image pulls from Harbor fail silently for any image not already cached on that node.
Fix: All nodes updated to use 192.168.88.1 (MikroTik) as primary system DNS, 8.8.8.8 as fallback. MikroTik is configured to forward to all three Technitium instances.
Config location on each node: /etc/netplan/50-cloud-init.yaml
```yaml
nameservers:
  addresses:
    - 192.168.88.1   # MikroTik → Technitium (resolves homelab.vyanh.uk)
    - 8.8.8.8        # Fallback for public DNS during MikroTik outage
```
Nodes fixed: k8s-node1 (192.168.88.249), k8s-node2 (192.168.88.248), k8s-node3 (192.168.88.247), k8s-controlplane
Verify on any node:
```bash
ssh n1 "resolvectl query harbor.homelab.vyanh.uk"
# Expected: harbor.homelab.vyanh.uk: 192.168.88.12
```
Warning: netplan apply on these nodes requires a PTY (the sudo password prompt blocks non-interactive sessions). Use kubectl run with a privileged nsenter pod to apply changes remotely without an SSH password; see the sketch below.
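A sketch of that approach for one node (pod name, image, and node name are illustrative):

```bash
# Privileged pod that enters the host's namespaces via PID 1 and runs netplan apply
kubectl run netplan-apply --rm -it --restart=Never --image=busybox:1.36 \
  --overrides='{
    "spec": {
      "hostPID": true,
      "nodeName": "k8s-node1",
      "containers": [{
        "name": "netplan-apply",
        "image": "busybox:1.36",
        "stdin": true,
        "tty": true,
        "securityContext": {"privileged": true},
        "command": ["nsenter", "-t", "1", "-m", "-u", "-i", "-n", "--", "netplan", "apply"]
      }]
    }
  }'
```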
| Setting | Value |
|---|---|
| Chart | external-dns v1.19.0 |
| Namespace | external-dns |
| Sync Wave | 7 (after Technitium) |
| Provider | rfc2136 (TSIG HMAC-SHA256) |
| Policy | upsert-only (never deletes records) |
| Sources | ingress, service |
| Domain Filter | homelab.vyanh.uk |
| rfc2136 Host | 192.168.88.11, port 53 |
| Zone | homelab.vyanh.uk |
| TSIG Key | external-dns-key, alg: hmac-sha256 |
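A hedged sketch of how those settings could map onto the chart's values; key names follow the kubernetes-sigs external-dns chart but may differ between chart versions, and the env-var indirection for the TSIG secret is just one way to wire in the secret listed below:

```yaml
provider:
  name: rfc2136
policy: upsert-only
sources:
  - ingress
  - service
domainFilters:
  - homelab.vyanh.uk
env:
  - name: TSIG_SECRET
    valueFrom:
      secretKeyRef:
        name: technitium-tsig
        key: tsig-secret            # key name inside the secret is an assumption
extraArgs:
  - --rfc2136-host=192.168.88.11
  - --rfc2136-port=53
  - --rfc2136-zone=homelab.vyanh.uk
  - --rfc2136-tsig-keyname=external-dns-key
  - --rfc2136-tsig-secret-alg=hmac-sha256
  - --rfc2136-tsig-secret=$(TSIG_SECRET)   # expanded by the kubelet from env above
  - --rfc2136-tsig-axfr
```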
| Secret | Namespace | Vault Path | Notes |
|---|---|---|---|
| technitium-tsig | external-dns | kv/external-dns/tsig | TSIG key for rfc2136 updates |
Generate TSIG secret: openssl rand -base64 32
ExternalDNS watches Kubernetes Ingress and Service resources. When a new Ingress is created with a hostname matching homelab.vyanh.uk, ExternalDNS creates the corresponding A record in Technitium via rfc2136 (dynamic DNS update with TSIG authentication), pointing to the Traefik VIP.
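To exercise the same rfc2136 path by hand (the record name is hypothetical; nsupdate ships with bind-utils/dnsutils, and TSIG_SECRET holds the key material from Vault):

```bash
# Send a signed dynamic update to the primary, the same kind ExternalDNS sends
nsupdate -y "hmac-sha256:external-dns-key:${TSIG_SECRET}" <<'EOF'
server 192.168.88.11 53
zone homelab.vyanh.uk
update add rfc2136-test.homelab.vyanh.uk 300 A 192.168.88.12
send
EOF

# Verify the record landed, then clean it up via the web UI or another update
dig @192.168.88.11 rfc2136-test.homelab.vyanh.uk +short
```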