1.0.0 Release IaaS

This commit is contained in:
2026-03-15 04:41:02 +09:00
commit a7365da431
292 changed files with 36059 additions and 0 deletions

Tags: #os, #configuration, #network, #virtualization, #authorization, #authentication
## Preparation
### Set DHCP reservation and DNS record
#### Set DHCP reservation on KEA DHCP in OPNsense
Following [here](05_07_opnsense_kea.md).
- Services:Kea DHCP:Kea DHCPv4:Reservations - \[+\]
- Subnet: 192.168.10.0/24
- IP address: 192.168.10.12
- MAC address: 0A:49:6E:4D:02:00
- Hostname: auth
- Description: auth
- `save`
#### Set DNS records in BIND
Following [here](../06_network/06_03_net_bind.md).
- net server
- file:
- ~/data/containers/bind/lib/db.ilnmors.internal
- ~/data/containers/bind/lib/db.10.168.192.in-addr.arpa
```ini
# db.ilnmors.internal
# ...
auth IN A 192.168.10.12
# ...
# db.10.168.192.in-addr.arpa
# ...
12 IN PTR auth.ilnmors.internal.
# ...
```
```bash
# The AdGuard container has Requires=bind.service, so restarting bind also restarts AdGuard.
systemctl --user restart bind
```
### Create VM template
- ~/data/config/scripts/auth.sh
```bash
# For serial installation, use `--location` instead of `--cdrom`.
# Use the designated OVS port group (vlan10-access) for the network.
virt-install \
--boot uefi \
--name auth \
--os-variant debian13 \
--vcpus 2 \
--memory 4096 \
--location /var/lib/libvirt/images/debian-13.0.0-amd64-netinst.iso \
--disk pool=vm-images,size=66,format=qcow2,discard=unmap \
--network network=ovs-lan-net,portgroup=vlan10-access,model=virtio,mac=0A:49:6E:4D:02:00 \
--graphics none \
--console pty,target_type=serial \
--extra-args "console=ttyS0,115200"
# After entering this command, the console starts automatically.
```
### Debian installing
- Following [here](../03_common/03_01_debian_configuration.md) to install Debian.
- The Debian installer supports serial mode regardless of whether the getty@ttyS0 service is enabled.
- Following [here](../03_common/03_02_iptables.md) to set iptables.
- Following [here](../03_common/03_04_crowdsec.md) to set CrowdSec.
#### Serial console setting
After installation, use `Ctrl + ]` to exit the console. Before getty@ttyS0 is set up, the serial console can't be used to access the VM. Therefore, connect to the VM via SSH using the IP address set during installation, and follow the steps to enable the getty.
### Modify VM template settings
After the getty setting, shut down the auth VM first, either with `shutdown` inside the VM or `sudo virsh shutdown auth` on the hypervisor.
```bash
virsh edit auth
```
```xml
<!-- auth -->
...
</vcpu>
<cputune>
<shares>1024</shares>
</cputune>
<!-- cpu priority - 1024: default/2048: high/512: low -->
<!-- Remove the installer boot disk:
<disk type='file' device='cdrom'>
...
</disk>
-->
```
```bash
virsh dumpxml auth > ~/data/config/vms/dumps/auth.xml
virsh start auth && virsh console auth
# Start auth server with console
```
### Common setting
- auth.service
```ini
# ~/data/config/services/auth.service
# ~/.config/systemd/user/auth.service
[Unit]
Description=auth Auto Booting
After=network-online.target
Wants=network-online.target
Requires=opnsense.service
[Service]
Type=oneshot
# Maintain status as active
RemainAfterExit=yes
# The CrowdSec LAPI (192.168.10.1:8080) should be set up first
ExecStartPre=%h/data/config/scripts/wait-for-it.sh 192.168.10.1:8080 -t 0
ExecStartPre=%h/data/config/scripts/wait-for-it.sh 192.168.10.11:53 -t 0
ExecStartPre=/bin/bash -c "sleep 15"
# Run the service
ExecStart=/usr/bin/virsh -c qemu:///system start auth
# Stop the service
ExecStop=/usr/bin/virsh -c qemu:///system shutdown auth
[Install]
WantedBy=default.target
```
```bash
ln -s ~/data/config/services/auth.service ~/.config/systemd/user/auth.service
systemctl --user daemon-reload
systemctl --user enable auth.service
systemctl --user start auth.service
```
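`wait-for-it.sh` used in the `ExecStartPre=` lines above simply blocks until a TCP endpoint accepts connections. A minimal bash sketch of that check (illustrative only; the real script's arguments and timeout handling differ, and its `-t 0` means wait forever):

```shell
# Poll host:port until it accepts a TCP connection, or give up after `tries`.
wait_for() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    # /dev/tcp is a bash pseudo-device that opens a TCP connection
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0   # reachable
    fi
    sleep 1
  done
  return 1       # gave up
}
```

For example, `wait_for 192.168.10.11 53` would return once the DNS server is reachable.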

Tags: #os, #configuration, #network, #virtualization, #container, #security
## Step-CA
Step-CA is a modern CA server that can operate in a private network environment. It can issue CAs and certificates and apply policies through provisioners. It supports ACME, JWK, and other provisioner types.
### Secret management
- File:
- ~/data/config/secrets/.secret.yaml
- Edit `.secret.yaml` with `edit_secret.sh`
```yaml
# ~/data/config/secrets/.secret.yaml
# STEP_CA
STEP_CA_PASSWORD: generated_value
```
### Preparation
#### iptables and firewall rules
- Set iptables first, following [here](../03_common/03_02_iptables.md).
- Set firewall rules first, following [here](Latest/05_firewall/05_04_opnsense_rules.md).
#### Create directory for container
```bash
mkdir -p ~/data/containers/step-ca
chmod 700 ~/data/containers/step-ca
setfacl -m d:g::0 ~/data/containers/step-ca
setfacl -m d:o::0 ~/data/containers/step-ca
setfacl -m u:auth:rwx ~/data/containers/step-ca
setfacl -m u:100999:rwx ~/data/containers/step-ca
setfacl -d -m u:auth:rwx ~/data/containers/step-ca
setfacl -d -m u:100999:rwx ~/data/containers/step-ca
# After generating
sudo find ~/data/containers/step-ca -type f -exec setfacl -m m::rw {} \;
sudo find ~/data/containers/step-ca -type d -exec setfacl -m m::rwx {} \;
```
> The Step-CA container runs as 1000:1000 (step:step) inside the container, which is mapped to UID 100999 on the host. Therefore, the directories need ACLs set via `setfacl`.
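The 100999 figure follows from the rootless user-namespace mapping: assuming a typical `/etc/subuid` range starting at 100000, container UID 0 maps to the user itself and UIDs 1 and up map into the subuid range, so container UID 1000 lands at 100000 + 1000 - 1:

```shell
# Host UID for a container UID under a typical rootless subuid mapping.
# Assumption: /etc/subuid grants this user 100000:65536
# (verify with `podman unshare cat /proc/self/uid_map`).
subuid_base=100000
container_uid=1000
host_uid=$((subuid_base + container_uid - 1))
echo "$host_uid"   # → 100999
```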
#### Add new domain in BIND
Following [here](../06_network/06_03_net_bind.md).
- net server
- file: /home/net/data/containers/bind/lib/db.ilnmors.internal
```ini
# ...
step-ca IN CNAME auth.ilnmors.internal.
# ...
```
```bash
# Adguard container has Requires=bind.service. When it restarted, then Adguard also restarted.
systemctl --user restart bind
```
### Podman Image
```bash
podman pull smallstep/step-ca:0.28.4 # Pin the version; do not use `latest`, for easier management
```
#### CA generation
```bash
podman run --rm -it \
-v /home/auth/data/containers/step-ca:/home/step:rw \
smallstep/step-ca:0.28.4 step ca init \
--deployment-type standalone \
--name ilnmors.internal \
--dns step-ca.ilnmors.internal \
--address :9000 \
--provisioner step-admin@ilnmors.internal
# Private mode: standalone
# Intermediate CA setting options
# --ra stepCAS \
# --issuer https://step-ca.ilnmors.internal:9000 \
# --issuer-fingerprint ~~~~ \
# --issuer-provisioner jwk-ca@dev.ilnmors.internal
# --confidential-file ~~~
> [leave empty and we'll generate one]: [blank]
# Print
---
✔ Password: Generated_value # Copy this value into the .secret.yaml file as STEP_CA_PASSWORD
✔ Root fingerprint: fingerprint
---
```
> The password encrypts the root CA's private key
```bash
# Podman secret
extract_secret.sh ~/data/config/secrets/.secret.yaml -f "STEP_CA_PASSWORD" | podman secret create "STEP_CA_PASSWORD" -
```
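`extract_secret.sh` is the author's helper script. As an illustration only, a hypothetical minimal stand-in that prints one top-level `KEY: value` field from a flat YAML file could look like this (the real script may decrypt or behave differently):

```shell
# Hypothetical stand-in for extract_secret.sh: print the value of a single
# top-level "KEY: value" entry from a flat YAML file.
extract_field() {
  awk -v k="$2" -F': ' '$1 == k { print $2; exit }' "$1"
}

# Demo with a throwaway file
printf 'STEP_CA_PASSWORD: Generated_value\n' > /tmp/demo_secret.yaml
extract_field /tmp/demo_secret.yaml STEP_CA_PASSWORD   # → Generated_value
```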
### Configuration files
- File: ~/data/containers/step-ca/config/ca.json
#### Provisioner
A provisioner is essentially an object that issues certificates, acting as an RA. It verifies the CSR from a client and, when the request is valid against its policy, signs the certificate with the CA's private key. Step-CA supports various types of provisioners; in this homelab, only ACME will be used, because it is easy to manage with the OPNsense ACME client. Step-CA supports one root CA and one intermediate CA per container, so only one intermediate CA will be operated in this project. However, this document also explains how to set up multiple intermediate CAs, and the JWK approach.
##### jwk-ca@ilnmors.internal
This provisioner issues intermediate CAs. It won't be used in this project. The CA-related X.509 options are optional and defined as extension options; to define them in Step-CA, a template file is needed.
- file: ~/data/containers/step-ca/templates/ca.tpl
```json
{
"subject": {{ toJson .Subject }},
"keyUsage": ["certSign", "crlSign"],
"basicConstraints": {
"isCA": true,
"maxPathLen": 0
}
}
```
> keyUsage: allows the certificate to sign certificates and CRLs
> isCA: marks the certificate as a CA
> maxPathLen: the number of CA levels allowed below this one
- Define provisioner
```bash
# --create generates the key pair automatically.
# --ca-config: sign with the root CA's private key.
# --x509-template: use the X.509 template above.
podman exec -it step-ca \
step ca provisioner add jwk-ca@ilnmors.internal \
--create \
--type JWK \
--ca-config /home/step/config/ca.json \
--x509-template /home/step/templates/ca.tpl \
--x509-max-dur 87600h \
--x509-default-dur 87600h
```
##### jwk@ilnmors.internal
This provisioner issues certificates based on identity (using a pre-shared JWK and JWT), for example for DB communication. The certificate is issued based on a key enrolled in the provisioner. However, in this project all certificates will be handled by the central ACME clients, the OPNsense ACME client and Caddy.
- Define provisioner
```bash
# --create generates the key pair automatically.
podman exec -it step-ca \
step ca provisioner add jwk-crt@ilnmors.internal \
--create \
--type JWK \
--x509-default-dur 2160h # Set the default validity to 90 days.
```
##### acme@ilnmors.internal
This provisioner is to issue the certificates for https communication. The certificate is issued based on challenge; the ownership of domain.
- Define provisioner
```bash
podman exec -it step-ca \
step ca provisioner add acme@ilnmors.internal \
--type ACME \
--x509-default-dur 2160h # Set the default validity to 90 days.
```
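The durations passed to the provisioners are plain hour counts: 2160h for leaf certificates and 87600h for the intermediate CA:

```shell
# Convert the provisioner durations into more familiar units.
echo $((2160 / 24))          # leaf default in days → 90
echo $((87600 / 24 / 365))   # intermediate CA in years → 10
```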
#### Subject
Step-CA uses subjects as accounts; they are used to manage Step-CA remotely. To use this, pass the `--remote-management` option when Step-CA is initialized, or set `authority.enableAdmin: true` in `ca.json`. When this is enabled, provisioners are not defined in `ca.json` but stored in Step-CA's own DB.
#### Policy
A self-hosted Step-CA server doesn't support per-provisioner X.509 policies; it only allows an authority-level policy. Only `ilnmors.internal` and `*.ilnmors.internal` certificates are required, so designate the policy in `ca.json`.
> Policies can be administered using the step CLI application. The commands are part of the step ca policy namespace. In a self-hosted step-ca, policies can be configured on the authority level. Source: [here](https://smallstep.com/docs/step-ca/policies/)
- file: ~/data/containers/step-ca/config/ca.json
```json
...
"authority": {
"policy": {
"x509": {
"allow": {
"dns": [
"ilnmors.internal",
"*.ilnmors.internal"
]
},
"allowWildcardNames": true
}
},
"provisioners": [ ... ]
...
}
...
```
### Quadlet
- File:
- ~/data/config/containers/step-ca/step-ca.container
```ini
# ~/data/config/containers/step-ca/step-ca.container
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Step-CA
After=network-online.target
Wants=network-online.target
[Container]
Image=docker.io/smallstep/step-ca:0.28.4
ContainerName=step-ca
PublishPort=9000:9000/tcp
Volume=%h/data/containers/step-ca:/home/step:rw
Environment="TZ=Asia/Seoul"
Environment="PWDPATH=/run/secrets/STEP_CA_PASSWORD"
Secret=STEP_CA_PASSWORD,target=/run/secrets/STEP_CA_PASSWORD
Label=diun.enable=true
Label=diun.watch_repo=true
[Install]
WantedBy=default.target
```
#### Create systemd `.service` file
```bash
mkdir -p ~/.config/containers/systemd
ln -s ~/data/config/containers/step-ca/step-ca.container ~/.config/containers/systemd/step-ca.container
systemctl --user daemon-reload
```
#### Enable and start service
```bash
systemctl --user start step-ca.service
```
### Verify server
#### Server health check
```bash
curl -k https://step-ca.ilnmors.internal:9000/health
> {"status":"ok"}
```
#### Server policy check
```bash
podman exec -it step-ca step ca certificate test.com test.crt test_key --provisioner acme@ilnmors.internal
> error creating new ACME order: The server will not issue certificates for the identifier
```
---
### Trust the root CA certificate
#### Linux
##### Debian/ubuntu
- File: /usr/local/share/ca-certificates/{ca.crt, ca.pem}
- `update-ca-certificates`
##### CentOS/RHEL/Fedora
- File: /etc/pki/ca-trust/source/anchors/{ca.crt, ca.pem}
- `update-ca-trust`
#### Windows
- `Windows + R` + `certlm.msc`
- `All Task` - `Import`
#### Firefox
- Settings - Privacy & Security - Certificates - View Certificates - Authorities - Import
---
### Intermediate CA setting
It won't be used in this project, but here is how to set it up.
#### Example of dev.ilnmors.internal intermediate CA
- init CA
```bash
# --issuer-fingerprint: the root CA's fingerprint
# --confidential-file: the jwk-ca provisioner's password file
podman run --rm -it \
-v /home/dev/data/containers/step-ca:/home/step:rw \
smallstep/step-ca:0.28.4 step ca init \
--deployment-type standalone \
--name dev.ilnmors.internal \
--dns step-ca.dev.ilnmors.internal \
--address :9000 \
--provisioner admin@dev.ilnmors.internal \
--ra stepCAS \
--issuer https://step-ca.ilnmors.internal:9000 \
--issuer-fingerprint ~~~~ \
--issuer-provisioner jwk-ca@dev.ilnmors.internal \
--confidential-file ~~~
```

Tags: #os, #configuration, #network, #virtualization, #container, #security, #authentication, #authorization, #sso
## Caddy - auth
Caddy is an open-source reverse proxy (web server) that automatically obtains and applies TLS certificates from a CA via the ACME protocol. It supports various modules, including DNS modules. However, the most important and fundamental services, such as OPNsense, AdGuard Home, Step-CA, and Authelia, will not use Caddy, to keep them independent.
### Secret management
- File:
- ~/data/config/secrets/.secret.yaml
- Edit `.secret.yaml` with `edit_secret.sh`
```yaml
# ~/data/config/secrets/.secret.yaml
# CADDY:
CADDY_ACME_KEY: acme-key_key_value (Only secret value)
CADDY_CROWDSEC_KEY: CADDY_LAPI_KEY
```
```bash
# Podman secret
extract_secret.sh .secret.yaml -f CADDY_ACME_KEY | podman secret create CADDY_ACME_KEY -
extract_secret.sh .secret.yaml -f CADDY_CROWDSEC_KEY | podman secret create CADDY_CROWDSEC_KEY -
```
### Preparation
#### iptables and firewall rules
- Set iptables first, following [here](../03_common/03_02_iptables.md).
- Set firewall rules first, following [here](Latest/05_firewall/05_04_opnsense_rules.md).
#### Create directory for container
```bash
mkdir -p ~/data/containers/caddy-auth/{etc,data}
chmod -R 700 ~/data/containers/caddy-auth
```
> The Caddy container runs as 0:0 (root:root) inside the container, which is mapped to the host user's own UID. Therefore, the directories don't need ACLs via `setfacl`.
#### Add new domain in BIND
Following [here](../06_network/06_03_net_bind.md).
- net server
- file: ~/data/containers/bind/lib/db.ilnmors.internal
```ini
# ...
login IN CNAME auth.ilnmors.internal.
# ...
```
```bash
# The AdGuard container has Requires=bind.service, so restarting bind also restarts AdGuard.
systemctl --user restart bind
```
### Podman Image
#### Podman containerfile
Caddy supports various modules. The rfc2136 (nsupdate) DNS module and the CrowdSec bouncer will be used in this homelab project.
- file:
- ~/data/config/containers/caddy-auth/containerfile-caddy-2.10.2-auth
- ~/data/config/containers/caddy-auth/root_ca.crt
```containerfile
FROM caddy:2.10.2-builder-alpine AS builder
RUN xcaddy build \
--with github.com/caddy-dns/rfc2136 \
--with github.com/hslatman/caddy-crowdsec-bouncer/crowdsec \
--with github.com/hslatman/caddy-crowdsec-bouncer/http
FROM caddy:2.10.2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY ./root_ca.crt /usr/local/share/ca-certificates/root_ca.crt
RUN update-ca-certificates
```
#### Podman image build
```bash
podman build -t caddy:2.10.2-auth -f ~/data/config/containers/caddy-auth/containerfile-caddy-2.10.2-auth . && podman image prune -f
# Manually delete the plain caddy and caddy-builder-alpine images after the command above.
```
### Configuration files
The Caddyfile will be updated after the Authelia setup.
```bash
# fix inconsistencies
podman exec caddy-auth caddy fmt --overwrite /etc/caddy/Caddyfile
# After the Caddyfile is changed, use this command.
podman exec caddy-auth caddy reload --config /etc/caddy/Caddyfile
```
- file:
- ~/data/containers/caddy-auth/etc/Caddyfile
- ~/data/containers/caddy-auth/certs/root_ca.crt
```ini
# Caddyfile
# ~/data/containers/caddy-auth/etc/Caddyfile
# Global option
{
# CrowdSec LAPI connection
crowdsec {
api_url https://crowdsec.ilnmors.internal:8080
api_key "{file./run/secrets/CADDY_CROWDSEC_KEY}"
}
}
# Snippets
# CrowdSec log for parser
(crowdsec_log) {
log {
output file /data/access.log {
mode 0640
roll_size 100MiB
roll_keep 1
}
}
}
# Private TLS ACME with DNS-01-challenge
(private_tls) {
tls {
issuer acme {
dir https://step-ca.ilnmors.internal:9000/acme/acme@ilnmors.internal/directory
dns rfc2136 {
server bind.ilnmors.internal:2253
key_name acme-key
key_alg hmac-sha256
key "{file./run/secrets/CADDY_ACME_KEY}"
}
}
}
}
test.ilnmors.com {
import crowdsec_log
route {
crowdsec
root * /usr/share/caddy
file_server
}
}
caddy.ilnmors.internal {
import private_tls
import crowdsec_log
route {
crowdsec
root * /usr/share/caddy
file_server
}
}
```
### Quadlet
- File:
- ~/data/config/containers/caddy-auth/caddy-auth.container
```ini
# ~/data/config/containers/caddy-auth/caddy-auth.container
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Caddy - auth
After=step-ca.service
Requires=step-ca.service
[Container]
Image=localhost/caddy:2.10.2-auth
ContainerName=caddy-auth
# To issue certificate from step-ca
AddHost=step-ca.ilnmors.internal:host-gateway
PublishPort=2080:80/tcp
PublishPort=2443:443/tcp
Volume=%h/data/containers/caddy-auth/etc:/etc/caddy:rw
Volume=%h/data/containers/caddy-auth/data:/data:rw
Environment="TZ=Asia/Seoul"
Secret=CADDY_ACME_KEY,target=/run/secrets/CADDY_ACME_KEY
Secret=CADDY_CROWDSEC_KEY,target=/run/secrets/CADDY_CROWDSEC_KEY
Label=diun.enable=true
Label=diun.watch_repo=true
# This label needs configuration in `diun.yml`
Label=diun.regopt=caddy-auth-source
[Install]
WantedBy=default.target
```
#### Create systemd `.service` file
```bash
# linger has to be activated
ln -s ~/data/config/containers/caddy-auth/caddy-auth.container ~/.config/containers/systemd/caddy-auth.container
systemctl --user daemon-reload
```
#### Enable and start service
```bash
systemctl --user start caddy-auth.service
```
### Crowdsec bouncer and agent
- Following [here](../03_common/03_04_crowdsec.md).

Tags: #os, #configuration, #network, #virtualization, #container, #security, #authentication, #authorization, #sso
## Authentik
Authentik is one of the most popular and powerful open-source IdP solutions, supporting SSO, 2FA, LDAP, RADIUS, etc. It will be combined with the Caddy-security module to apply SSO.
### Secret management
File:
- ~/data/config/secrets/.secret.yaml
- Edit `.secret.yaml` with `edit_secret.sh`
```yaml
# ~/data/config/secrets/.secret.yaml
# Authentik:
AUTHENTIK_SECRET_KEY: openssl rand -base64 32 value
AUTHENTIK_POSTGRESQL__PASSWORD: openssl rand -base64 32 value
```
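Both values are random 32-byte secrets; `openssl rand -base64 32` emits each one as 44 base64 characters:

```shell
# Generate a 32-byte random secret, base64-encoded.
# 32 bytes always encode to 44 base64 characters.
SECRET=$(openssl rand -base64 32)
echo "${#SECRET}"   # → 44
```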
```bash
# Podman secret
extract_secret.sh .secret.yaml -f AUTHENTIK_SECRET_KEY | podman secret create AUTHENTIK_SECRET_KEY -
extract_secret.sh .secret.yaml -f AUTHENTIK_POSTGRESQL__PASSWORD | podman secret create AUTHENTIK_POSTGRESQL__PASSWORD -
```
### Preparation
#### iptables and firewall rules
- Set iptables first, following [here](../03_common/03_02_iptables.md).
- Set firewall rules first, following [here](Latest/05_firewall/05_04_opnsense_rules.md).
#### Create directory for container
```bash
mkdir -p ~/data/containers/authentik
chmod 700 ~/data/containers/authentik
setfacl -m d:g::0 ~/data/containers/authentik
setfacl -m d:o::0 ~/data/containers/authentik
setfacl -m u:auth:rwx ~/data/containers/authentik
setfacl -m u:100999:rwx ~/data/containers/authentik
setfacl -d -m u:auth:rwx ~/data/containers/authentik
setfacl -d -m u:100999:rwx ~/data/containers/authentik
mkdir -p ~/data/containers/authentik/{backups,certs,media,templates}
# After generating
sudo find ~/data/containers/authentik -type f -exec setfacl -m m::rw {} \;
sudo find ~/data/containers/authentik -type d -exec setfacl -m m::rwx {} \;
```
> The Authentik container runs as 1000:1000 (authentik:authentik) inside the container, which is mapped to UID 100999 on the host. Therefore, the directories need ACLs set via `setfacl`.
#### Add new domain in BIND
Following [here](../06_network/06_03_net_bind.md).
- net server
- file: ~/data/containers/bind/lib/db.ilnmors.internal
```ini
# ...
authentik IN CNAME auth.ilnmors.internal.
# ...
```
```bash
# The AdGuard container has Requires=bind.service, so restarting bind also restarts AdGuard.
systemctl --user restart bind
```
#### Add new database and user
- Following [here](../08_development/08_02_dev_postgresql.md). Authentik uses PostgreSQL.
- dev server
```bash
podman exec -it -u postgres postgresql psql -U postgres
> # Create user and database
> CREATE USER authentik WITH PASSWORD '$AUTHENTIK_POSTGRESQL__PASSWORD_value';
> CREATE DATABASE authentik_db;
> ALTER DATABASE authentik_db OWNER TO authentik;
> \du
> \l
```
#### Add information in caddy-auth
- Following [here](./07_03_auth_main_caddy.md).
- auth server
- File: ~/data/containers/caddy-auth/etc/Caddyfile
```ini
authentik.ilnmors.internal {
	import private_tls
import crowdsec_log
reverse_proxy authentik.ilnmors.internal:9080
}
```
```bash
# fix inconsistencies
podman exec caddy-auth caddy fmt --overwrite /etc/caddy/Caddyfile
# After the Caddyfile is changed, use this command.
podman exec caddy-auth caddy reload --config /etc/caddy/Caddyfile
```
### Podman Image
```bash
podman pull ghcr.io/goauthentik/server:2025.10.0 # Pin the version; do not use `latest`, for easier management
```
### Configuration file
- file:
- ~/data/containers/authentik/certs/root_ca.crt
### Quadlet
- File:
- ~/data/config/containers/authentik/authentik-pod.pod
- ~/data/config/containers/authentik/authentik.container
- ~/data/config/containers/authentik/authentik-worker.container
```ini
# ~/data/config/containers/authentik/authentik-pod.pod
[Quadlet]
DefaultDependencies=false
[Pod]
PodName=authentik
# web port
PublishPort=9080:9000/tcp
# LDAP port
#PublishPort=[set_port]:3389
# Prometheus Port
#PublishPort=[set_port]:9300
```
```ini
# ~/data/config/containers/authentik/authentik.container
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Authentik - Server
After=caddy-auth.service
Wants=caddy-auth.service
[Service]
ExecStartPre=%h/data/config/scripts/wait-for-it.sh -h postgresql.ilnmors.internal -p 5432 -t 0
ExecStartPre=/bin/sleep 5
[Container]
Pod=authentik-pod.pod
Image=ghcr.io/goauthentik/server:2025.10.0
ContainerName=authentik-server
# Change default http port from 9000 to 9080 - in pod
Volume=%h/data/containers/authentik/media:/media:rw
Volume=%h/data/containers/authentik/certs:/certs:ro
Volume=%h/data/containers/authentik/templates:/templates:rw
Volume=%h/data/containers/authentik/backups:/backups:rw
# Default
Environment="TZ=Asia/Seoul"
# Listen
#AUTHENTIK_LISTEN__HTTP=0.0.0.0:9000
# LDAP > 0.0.0.0:3389
# METRICS > 0.0.0.0:9300
# AUTHENTIK_LISTEN__TRUSTED_PROXY_CIDRS > `127.0.0.0/8`, `10.0.0.0/8`, `172.16.0.0/12`, `192.168.0.0/16`, `fe80::/10`, `::1/128`
# DB connection - This can be changed as hot-reloading, however adding or removing needs restart.
Environment="AUTHENTIK_POSTGRESQL__HOST=postgresql.ilnmors.internal"
Environment="AUTHENTIK_POSTGRESQL__PORT=5432"
# Password will be injected as secret
Environment="AUTHENTIK_POSTGRESQL__USER=authentik"
Environment="AUTHENTIK_POSTGRESQL__NAME=authentik_db"
# SSL DB configuration
Environment="AUTHENTIK_POSTGRESQL__SSLMODE=verify-full"
Environment="AUTHENTIK_POSTGRESQL__SSLROOTCERT=/certs/root_ca.crt"
# This homelab doesn't use mTLS, therefore do not set `AUTHENTIK_POSTGRESQL__SSLCERT` and `AUTHENTIK_POSTGRESQL__SSLKEY`
# Media configuration - 'file' or 's3'
Environment="AUTHENTIK_STORAGE__MEDIA_BACKEND=file"
# Email configuration - after generate local Email services
# AUTHENTIK_EMAIL__HOST=ilnmors.internal
# AUTHENTIK_EMAIL__PORT=25
# AUTHENTIK_EMAIL__USERNAME=authentik
# AUTHENTIK_EMAIL__USE_TLS=true
# AUTHENTIK_EMAIL__FROM=authentik@ilnmors.internal
Secret=AUTHENTIK_SECRET_KEY,type=env
Secret=AUTHENTIK_POSTGRESQL__PASSWORD,type=env
# Start server
Exec=server
Label=diun.enable=true
Label=diun.watch_repo=true
[Install]
WantedBy=default.target
```
```ini
# ~/data/config/containers/authentik/authentik-worker.container
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Authentik - worker
After=authentik.service
Requires=authentik.service
[Container]
Pod=authentik-pod.pod
Image=ghcr.io/goauthentik/server:2025.10.0
ContainerName=authentik-worker
# Change default http port from 9000 to 9080 - in pod
Volume=%h/data/containers/authentik/media:/media:rw
Volume=%h/data/containers/authentik/certs:/certs:ro
Volume=%h/data/containers/authentik/templates:/templates:rw
Volume=%h/data/containers/authentik/backups:/backups:rw
# Default
Environment="TZ=Asia/Seoul"
# Listen
#AUTHENTIK_LISTEN__HTTP=0.0.0.0:9000
# LDAP > 0.0.0.0:3389
# METRICS > 0.0.0.0:9300
# AUTHENTIK_LISTEN__TRUSTED_PROXY_CIDRS > `127.0.0.0/8`, `10.0.0.0/8`, `172.16.0.0/12`, `192.168.0.0/16`, `fe80::/10`, `::1/128`
# DB connection - This can be changed as hot-reloading, however adding or removing needs restart.
Environment="AUTHENTIK_POSTGRESQL__HOST=postgresql.ilnmors.internal"
Environment="AUTHENTIK_POSTGRESQL__PORT=5432"
# Password will be injected as secret
Environment="AUTHENTIK_POSTGRESQL__USER=authentik"
Environment="AUTHENTIK_POSTGRESQL__NAME=authentik_db"
# SSL DB configuration
Environment="AUTHENTIK_POSTGRESQL__SSLMODE=verify-full"
Environment="AUTHENTIK_POSTGRESQL__SSLROOTCERT=/certs/root_ca.crt"
# This homelab doesn't use mTLS, therefore do not set `AUTHENTIK_POSTGRESQL__SSLCERT` and `AUTHENTIK_POSTGRESQL__SSLKEY`
# Media configuration - 'file' or 's3'
Environment="AUTHENTIK_STORAGE__MEDIA_BACKEND=file"
# Email configuration - after generate local Email services
# AUTHENTIK_EMAIL__HOST=ilnmors.internal
# AUTHENTIK_EMAIL__PORT=25
# AUTHENTIK_EMAIL__USERNAME=authentik
# AUTHENTIK_EMAIL__USE_TLS=true
# AUTHENTIK_EMAIL__FROM=authentik@ilnmors.internal
Secret=AUTHENTIK_SECRET_KEY,type=env
Secret=AUTHENTIK_POSTGRESQL__PASSWORD,type=env
# Start worker
Exec=worker
[Install]
WantedBy=default.target
```
> All configuration, except the DB connection information, is saved in PostgreSQL. Do not set other environment variables: they take priority and will override configuration made in the web UI.
#### Create systemd `.service` file
```bash
# linger has to be activated
ln -s ~/data/config/containers/authentik/authentik-pod.pod ~/.config/containers/systemd/authentik-pod.pod
ln -s ~/data/config/containers/authentik/authentik.container ~/.config/containers/systemd/authentik.container
ln -s ~/data/config/containers/authentik/authentik-worker.container ~/.config/containers/systemd/authentik-worker.container
systemctl --user daemon-reload
```
#### Enable and start service
```bash
systemctl --user start authentik.service
```
### Web UI configuration
#### Access web UI and initial setting
- URL: https://authentik.ilnmors.internal/if/flow/initial-setup/
> If you can't initialize Authentik on the first attempt, stop the Authentik containers, `DROP DATABASE authentik_db;`, and recreate the database.
#### Initial setting wizard
- Email: Admin-email
- thiswork21@gmail.com
- Password: password

Tags: #os, #configuration, #network, #virtualization, #container, #security, #authentication, #authorization, #sso
## LLDAP (Light LDAP)
LLDAP provides the LDAP protocol in a very light and modern way. It supports not only the main functions of the LDAP protocol, such as group and user management, but also a web UI and a RESTful API. Additionally, it is very simple to set up and manage, and it uses a small amount of resources. Following [here](../02_theory/02_05_sso.md) about the structure of LDAP.
### Secret management
File:
- ~/data/config/secrets/.secret.yaml
- Edit `.secret.yaml` with `edit_secret.sh`
```yaml
# ~/data/config/secrets/.secret.yaml
# LLDAP:
LLDAP_DATABASE_URL: postgres://ldap:$PASSWORD@postgresql.ilnmors.internal/ldap_db?sslmode=verify-full&sslrootcert=/etc/ssl/ldap/root_ca.crt # $PASSWORD is an `openssl rand -base64 32` value and must be URL-encoded
LLDAP_LDAP_USER_PASSWORD: $PASSWORD
LLDAP_KEY_SEED: "$(LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' </dev/urandom | head -c 32)"
LLDAP_JWT_SECRET: "$(LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' </dev/urandom | head -c 32)"
```
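The comment above notes that `$PASSWORD` must be URL-encoded before being embedded in `LLDAP_DATABASE_URL`. One way to do that, assuming `python3` is available:

```shell
# URL-encode a password so it can be embedded safely in a connection URL.
urlencode() {
  python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}

urlencode 'p@ss/word+x='   # → p%40ss%2Fword%2Bx%3D
```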
```bash
# Podman secret
extract_secret.sh .secret.yaml -f LLDAP_DATABASE_URL | podman secret create LLDAP_DATABASE_URL -
extract_secret.sh .secret.yaml -f LLDAP_LDAP_USER_PASSWORD | podman secret create LLDAP_LDAP_USER_PASSWORD -
extract_secret.sh .secret.yaml -f LLDAP_JWT_SECRET | podman secret create LLDAP_JWT_SECRET -
extract_secret.sh .secret.yaml -f LLDAP_KEY_SEED | podman secret create LLDAP_KEY_SEED -
```
### Preparation
#### iptables and firewall rules
- Set iptables first, following [here](../03_common/03_02_iptables.md).
- 636:6360: localhost nat
	- LDAPS will only be used from localhost (Authelia), so a PREROUTING rule is not required in the nat table; only an OUTPUT rule is.
- Set firewall rules first, following [here](Latest/05_firewall/05_04_opnsense_rules.md).
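The 636→6360 redirect described above could be sketched as follows (an assumed rule shape, not the exact rule from the linked doc; adapt it to that iptables layout):

```shell
# Redirect locally generated LDAPS traffic (636) to lldap's published 6360.
# OUTPUT chain only: the traffic originates on this host (Authelia),
# so no PREROUTING rule is needed.
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 636 -j REDIRECT --to-ports 6360
```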
#### Create directory for container
```bash
mkdir -p ~/data/containers/ldap
chmod 700 ~/data/containers/ldap
setfacl -m d:g::0 ~/data/containers/ldap
setfacl -m d:o::0 ~/data/containers/ldap
setfacl -m u:auth:rwx ~/data/containers/ldap
setfacl -m u:100999:rwx ~/data/containers/ldap
setfacl -d -m u:auth:rwx ~/data/containers/ldap
setfacl -d -m u:100999:rwx ~/data/containers/ldap
mkdir ~/data/containers/ldap/{certs,data}
# After generating
sudo find ~/data/containers/ldap -type f -exec setfacl -m m::rw {} \;
sudo find ~/data/containers/ldap -type d -exec setfacl -m m::rwx {} \;
```
> The lldap container runs as 1000:1000 (lldap:lldap) inside the container, which is mapped to UID 100999 on the host. Therefore, the directories need ACLs set via `setfacl`.
#### Add new domain in BIND
Following [here](../06_network/06_03_net_bind.md).
- net server
- file: ~/data/containers/bind/lib/db.ilnmors.internal
```ini
# ...
ldap IN CNAME auth.ilnmors.internal.
# ...
```
```bash
# The AdGuard container has Requires=bind.service, so restarting bind also restarts AdGuard.
systemctl --user restart bind
```
#### Add new database and user
- Following [here](../08_development/08_02_dev_postgresql.md). LLDAP can use PostgreSQL (the default is SQLite).
- dev server
```bash
podman exec -it -u postgres postgresql psql -U postgres
> # Create user and database
> CREATE USER ldap WITH PASSWORD '$POSTGRES_LDAP_PASSWORD';
> CREATE DATABASE ldap_db;
> ALTER DATABASE ldap_db OWNER TO ldap;
> \du
> \l
```
#### Add information in caddy-auth
- Following [here](./07_03_auth_main_caddy.md).
- auth server
- File: ~/data/containers/caddy-auth/etc/Caddyfile
```ini
ldap.ilnmors.internal {
	import private_tls
import crowdsec_log
reverse_proxy host.containers.internal:17170
}
```
```bash
# fix inconsistencies
podman exec caddy-auth caddy fmt --overwrite /etc/caddy/Caddyfile
# After Caddyfile setting is changed use this command.
podman exec caddy-auth caddy reload --config /etc/caddy/Caddyfile
```
#### Certificates configuration
- File: ~/data/containers/ldap/certs/root_ca.crt
- ACME setting (OPNsense)
- Services:ACME Client:Certificates - Certificates - \[+\]
- Common Name: ldap.ilnmors.internal
- Description: ldap
- ACME Account: ldap.ilnmors.internal
> Even though the provisioner's name includes `@`, it has to be referenced with `.`.
>
> i.e. `acme@ilnmors.internal` > `acme.ilnmors.internal`
- Challenge Type: ilnmors.internal-dns-01-challenge
- \[\*\] Auto Renewal
- Automations: ldap-auto-acme, ldap-auto-restart
- Automations (OPNsense)
- Services:ACME Client:Automations - Automation - \[+\]
	- Name: ldap-auto-acme / ldap-auto-restart
	- Description: ldap acme crt issue / restart ldap after the crt is issued
- Run Command: Upload certificate via SFTP / Remote command via SSH
- SFTP Host: ldap.ilnmors.internal
- Username: auth
- Identity Type: ed25519
- Remote Path(SFTP): /home/auth/data/containers/ldap/certs
- Command(SSH): setfacl -m m::r /home/auth/data/containers/ldap/certs/ldap.ilnmors.internal/* && systemctl --user restart ldap.service
- `Show Identity`
> Copy the required parameters `ssh-ed25519 ~~~ root@opnsense.ilnmors.internal`
>
> Add the parameters to the auth server's ~/.ssh/authorized_keys
- `Test Connect` and `Save`
- The SSH command will succeed only after PostgreSQL has started.
### Podman Image
```bash
podman pull lldap/lldap:v0.6.2 # Pin the version; do not use `latest`, for easier management
```
### Configuration file
- file:
- ~/data/containers/ldap/certs/root_ca.crt
- ~/data/containers/ldap/data/lldap_config.toml
### Initiating
```bash
podman run --rm \
--secret LLDAP_DATABASE_URL,type=env \
--secret LLDAP_KEY_SEED,type=env \
--secret LLDAP_JWT_SECRET,type=env \
--secret LLDAP_LDAP_USER_PASSWORD,type=env \
-e TZ="Asia/Seoul" \
-e LLDAP_LDAP_BASE_DN="dc=ilnmors,dc=internal" \
-v "$HOME"/data/containers/ldap/data:/data:rw \
-v "$HOME"/data/containers/ldap/certs:/etc/ssl/ldap:ro \
lldap/lldap:v0.6.2
# Press `Ctrl + C` to exit
podman secret rm LLDAP_LDAP_USER_PASSWORD
```
### Quadlet
- File:
- ~/data/config/containers/ldap/ldap.container
```ini
# ~/data/config/containers/ldap/ldap.container
[Quadlet]
DefaultDependencies=false
[Unit]
Description=LDAP
After=caddy.service
Wants=caddy.service
[Service]
ExecStartPre=%h/data/config/scripts/wait-for-it.sh -h postgresql.ilnmors.internal -p 5432 -t 0
ExecStartPre=sleep 5
[Container]
Image=lldap/lldap:v0.6.2
ContainerName=ldap
# For LDAPS - 636 > 6360 iptables
PublishPort=6360:6360/tcp
# Web UI
PublishPort=17170:17170/tcp
Volume=%h/data/containers/ldap/data:/data:rw
Volume=%h/data/containers/ldap/certs:/etc/ssl/ldap:ro
# Default
Environment="TZ=Asia/Seoul"
# Domain
Environment="LLDAP_LDAP_BASE_DN=dc=ilnmors,dc=internal"
# LDAPS
Environment="LLDAP_LDAPS_OPTIONS__ENABLED=true"
Environment="LLDAP_LDAPS_OPTIONS__CERT_FILE=/etc/ssl/ldap/ldap.ilnmors.internal/fullchain.pem"
Environment="LLDAP_LDAPS_OPTIONS__KEY_FILE=/etc/ssl/ldap/ldap.ilnmors.internal/key.pem"
# SMTP options > all of these can be set in /data/lldap_config.toml instead of Environment
# Only `LLDAP_SMTP_OPTIONS__PASSWORD` will be injected by secret
# LLDAP_SMTP_OPTIONS__ENABLE_PASSWORD_RESET=true
# LLDAP_SMTP_OPTIONS__SERVER=smtp.example.com
# LLDAP_SMTP_OPTIONS__PORT=465
# LLDAP_SMTP_OPTIONS__SMTP_ENCRYPTION=TLS
# LLDAP_SMTP_OPTIONS__USER=no-reply@example.com
# LLDAP_SMTP_OPTIONS__PASSWORD=PasswordGoesHere
# LLDAP_SMTP_OPTIONS__FROM=no-reply <no-reply@example.com>
# LLDAP_SMTP_OPTIONS__TO=admin <admin@example.com>
# Database
Secret=LLDAP_DATABASE_URL,type=env
# Secrets
Secret=LLDAP_KEY_SEED,type=env
Secret=LLDAP_JWT_SECRET,type=env
Label=diun.enable=true
Label=diun.watch_repo=true
[Install]
WantedBy=default.target
```
#### Create systemd `.service` file
```bash
# linger has to be activated
ln -s ~/data/config/containers/ldap/ldap.container ~/.config/containers/systemd/ldap.container
systemctl --user daemon-reload
```
#### Enable and start service
```bash
systemctl --user start ldap.service
```
### DB backup
- Following [here](../08_development/08_02_dev_postgresql.md).
- dev server
```bash
systemctl --user enable --now postgresql-data-backup@ldap.timer
```
> The data stored in postgresql is the complete LDAP dataset, including all users, groups, and hashed passwords. However, LLDAP_KEY_SEED and LLDAP_JWT_SECRET are not stored in postgresql; they belong to the container itself and are injected as environment values.
### Configuration
#### Access web UI and Login
- URL: https://ldap.ilnmors.internal
- ID: admin
- PW: $LLDAP_LDAP_USER_PASSWORD
#### Create the groups
- Groups - \[\+\] Create a group
- Group: admins
- Group: users
#### Create the authelia user
- Users: \[\+\] Create a user
- Username (cn; uid): authelia
- Display name: Authelia
- First Name: Authelia
- Last Name (sn): Service
- Email (mail): authelia@ilnmors.internal
- Password: "$(openssl rand -base64 32)"
- lldap_strict_readonly \[Add to group\]
- This group grants read-only search authority.
> Save the password in .secret.yaml
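As a quick sanity check, `openssl rand -base64 32` always produces a 44-character string (32 random bytes, base64-encoded), so a truncated value in `.secret.yaml` is easy to spot:

```bash
# 32 bytes -> ceil(32/3) * 4 = 44 base64 characters
PW="$(openssl rand -base64 32)"
echo "${#PW}"   # 44
```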
#### Create the normal user
- Users: \[\+\] Create a user
- Username (cn; uid): user
- First Name: John
- Last Name (sn): Doe
- Email (mail): john_doe@ilnmors.internal
- Password: "$PASSWORD"
- (admins|users) \[Add to group\]
> Custom attributes in `User schema` and `Group schema` don't need to be added. They are an advanced feature for extra values such as an identity number or a phone number; the hardcoded (basic) schema that lldap provides is enough for Authelia.
> After all these steps, you can integrate Authelia for SSO.
### Usage of LDAP
#### Service Bind
LDAP calls `login` a Bind. When authelia binds to the LDAP server, it gains the search authority granted by the `lldap_strict_readonly` group.
#### Search
Because the authelia account has search authority, it can send search queries.
##### Flow of search
- Client (authelia) sends the query
- `uid=user in dc=ilnmors,dc=internal`
- LDAP server searches the DN of entry
- `uid=user,ou=people,dc=ilnmors,dc=internal`
- LDAP sends the DN to Client (authelia)
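The DN the server returns is plain string composition over the base DN; a small sketch with the values from the flow above:

```bash
# LLDAP keeps users under ou=people beneath the base DN
BASE_DN="dc=ilnmors,dc=internal"
LOGIN_UID="user"
DN="uid=${LOGIN_UID},ou=people,${BASE_DN}"
echo "$DN"   # uid=user,ou=people,dc=ilnmors,dc=internal
```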
### Authelia's work flow
#### First login
##### User login query
A user tries to log in on Authelia's login page.
- id: user
- password: 1234
##### Service Bind (Bind and search)
authelia binds to LLDAP server based on the information in configuration.yml.
- dn: authelia
- password: authelia's password
##### Search
authelia sends the query to LLDAP after bind.
- `uid=user in dc=ilnmors,dc=internal`
##### Response
The LLDAP server finds the entry and sends the DN back to authelia.
- `uid=user,ou=people,dc=ilnmors,dc=internal`
#### Verify the user login (Second login)
##### User Bind (Bind only)
authelia tries to bind to the LLDAP server with the credentials the user entered.
- dn: requested uid
- password: 1234
##### Verification from LLDAP
LLDAP verifies the password sent by authelia against the hash value stored in LLDAP's database.
##### Response
The LLDAP server sends back the result as `Success` or `Fail`.
> Search authority is the basic authority of any user who binds to the LDAP server. The user bind is only a way to check success or failure; handling that result is Authelia's job.

---
Tags: #os, #configuration, #network, #virtualization, #container, #security, #authentication, #authorization, #sso
## Authelia
Authelia is an open-source authentication and authorization server for IAM (Identity and Access Management). It supports SSO and can act as an OpenID Connect 1.0 Provider (IdP) on top of OAuth 2.0. Authelia uses either a file or an LDAP server as its user backend; in this homelab, the LLDAP server is that backend.
### Secret management
File:
- ~/data/config/secrets/.secret.yaml
- Edit `.secret.yaml` with `edit_secret.sh`
```yaml
# ~/data/config/secrets/.secret.yaml
# Authelia:
AUTHELIA_JWT_SECRET: "$(LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' </dev/urandom | head -c 32)"
AUTHELIA_SESSION_SECRET: "$(LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' </dev/urandom | head -c 32)"
AUTHELIA_STORAGE_SECRET: "$(LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' </dev/urandom | head -c 32)"
AUTHELIA_HMAC_SECRET: "$(openssl rand -base64 32)"
AUTHELIA_JWKS_RS256: |
  "$(podman run --rm \
  -v ~/data/containers/authelia/certs:/output:rw \
  authelia/authelia:4.39.13 \
  authelia crypto pair rsa \
  generate --directory /output/rsa)"
AUTHELIA_JWKS_ES256: |
  "$(podman run --rm \
  -v ~/data/containers/authelia/certs:/output:rw \
  authelia/authelia:4.39.13 \
  authelia crypto pair ecdsa \
  generate --directory /output/ecdsa)"
# Put the content of private.pem into the secret, then rm private.pem / leave public.pem
# This is set on the LLDAP server
AUTHELIA_LDAP_PASSWORD: "$(openssl rand -base64 32)"
POSTGRES_AUTHELIA_PASSWORD: "$(openssl rand -base64 32)"
```
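The `tr`/`head` pipeline embedded above can be run on its own to confirm it yields exactly 32 characters from the allowed set:

```bash
# Filter /dev/urandom down to the allowed character set, keep the first 32 chars
SECRET="$(LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' </dev/urandom | head -c 32)"
echo "${#SECRET}"   # 32
```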
```bash
# Podman secret
extract_secret.sh .secret.yaml -f AUTHELIA_JWT_SECRET | podman secret create AUTHELIA_JWT_SECRET -
extract_secret.sh .secret.yaml -f AUTHELIA_SESSION_SECRET | podman secret create AUTHELIA_SESSION_SECRET -
extract_secret.sh .secret.yaml -f AUTHELIA_STORAGE_SECRET | podman secret create AUTHELIA_STORAGE_SECRET -
extract_secret.sh .secret.yaml -f AUTHELIA_HMAC_SECRET | podman secret create AUTHELIA_HMAC_SECRET -
extract_secret.sh .secret.yaml -f AUTHELIA_LDAP_PASSWORD | podman secret create AUTHELIA_LDAP_PASSWORD -
extract_secret.sh .secret.yaml -f POSTGRES_AUTHELIA_PASSWORD | podman secret create POSTGRES_AUTHELIA_PASSWORD -
extract_secret.sh .secret.yaml -f AUTHELIA_JWKS_RS256 | podman secret create AUTHELIA_JWKS_RS256 -
extract_secret.sh .secret.yaml -f AUTHELIA_JWKS_ES256 | podman secret create AUTHELIA_JWKS_ES256 -
```
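One detail behind the piping above: `podman secret create -` stores exactly the bytes it reads from stdin, so the extractor should emit the raw value without a trailing newline (this assumes `extract_secret.sh` behaves like `printf`, not `echo`). The difference:

```bash
# printf emits the value byte-for-byte; echo appends a newline
printf '%s' 'secret_value' | wc -c   # 12
echo 'secret_value' | wc -c          # 13
```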
### Preparation
#### iptables and firewall rules
- Set iptables first, following [here](../03_common/03_02_iptables.md).
- 636:6360: localhost nat
- LDAPS is only used from localhost (Authelia), so a PREROUTING rule is not required in the nat table; only an OUTPUT rule is.
- Set firewall rules first, following [here](Latest/05_firewall/05_04_opnsense_rules.md).
#### Create directory for container
```bash
mkdir -p ~/data/containers/authelia/{certs,config}
chmod -R 700 ~/data/containers/authelia
```
> The authelia container runs as 0:0 (root:root) inside the container, which is mapped to the host's UID. Therefore, the directories don't need ACLs via `setfacl`.
#### Add new domain in BIND
Following [here](../06_network/06_03_net_bind.md).
- net server
- file: ~/data/containers/bind/lib/db.ilnmors.internal
```ini
# ...
authelia IN CNAME auth.ilnmors.internal.
# ...
```
```bash
# Adguard container has Requires=bind.service. When it restarted, then Adguard also restarted.
systemctl --user restart bind
```
#### Add new database and user
- Following [here](../08_development/08_02_dev_postgresql.md). Authelia can use postgresql (the default is SQLite).
- dev server
```bash
podman exec -it -u postgres postgresql psql -U postgres
> # Create user and database
> CREATE USER authelia WITH PASSWORD '$POSTGRES_AUTHELIA_PASSWORD';
> CREATE DATABASE authelia_db;
> ALTER DATABASE authelia_db OWNER TO authelia;
> \du
> \l
```
#### Add information in caddy-auth
- Following [here](./07_03_auth_main_caddy.md).
- auth server
- File: ~/data/containers/caddy-auth/etc/Caddyfile
```ini
authelia.ilnmors.com {
import crowdsec_log
route {
crowdsec
reverse_proxy host.containers.internal:9091
}
}
authelia.ilnmors.internal {
import internal_tls
import crowdsec_log
route {
crowdsec
reverse_proxy host.containers.internal:9091
}
}
```
```bash
# fix inconsistencies
podman exec caddy-auth caddy fmt --overwrite /etc/caddy/Caddyfile
# Run this command after changing the Caddyfile.
podman exec caddy-auth caddy reload --config /etc/caddy/Caddyfile
```
### Podman Image
```bash
podman pull authelia/authelia:4.39.13 # Pin the version; do not use `latest`, to keep updates manageable
```
### Configuration file
- file:
- ~/data/containers/authelia/certs/root_ca.crt
- ~/data/containers/authelia/config/configuration.yml
#### configuration.yml
```yaml
# authelia configuration.yml
---
# certificates setting
certificates_directory: '/etc/ssl/authelia/'
# theme setting - light, dark, grey, auto.
theme: 'auto'
# Server configuration
server:
# TLS will be applied on caddy
address: 'tcp://:9091/'
# Log configuration
log:
level: 'debug'
#file_path: 'path/of/log/file' - without this option, stdout is used
# TOTP configuration
totp:
# The issuer option is an identifier shown in the 2FA app, e.g. 'My homelab', 'ilnmors.internal', or 'Authelia - ilnmors'
issuer: 'ilnmors.internal'
# Identity validation configuration
identity_validation:
reset_password:
jwt_secret: '' # $AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET_FILE
# Authentication backend provider configuration
authentication_backend:
ldap:
# ldaps uses 636 -> NAT automatically rewrites port 636 in outgoing packets to 6360, which the lldap server uses.
address: 'ldaps://ldap.ilnmors.internal'
implementation: 'lldap'
# tls configuration; it uses certificates_directory's /etc/ssl/authelia/root_ca.crt
tls:
server_name: 'ldap.ilnmors.internal'
skip_verify: false
# LLDAP base DN
base_dn: 'dc=ilnmors,dc=internal'
additional_users_dn: 'ou=people'
additional_groups_dn: 'ou=groups'
# LLDAP filters
users_filter: '(&(|({username_attribute}={input})({mail_attribute}={input}))(objectClass=person))'
groups_filter: '(&(member={dn})(objectClass=groupOfNames))'
# LLDAP bind account configuration
user: 'uid=authelia,ou=people,dc=ilnmors,dc=internal'
password: '' # $AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE
# LLDAP schema mapping
attributes:
username: 'uid'
display_name: 'displayName'
mail: 'mail'
group_name: 'cn'
# Access control configuration
access_control:
default_policy: 'deny'
rules:
# authelia portal
- domain: 'authelia.ilnmors.internal'
policy: 'bypass'
- domain: 'authelia.ilnmors.com'
policy: 'bypass'
# Session provider configuration
session:
secret: '' # $AUTHELIA_SESSION_SECRET_FILE
expiration: '24 hours' # Session maintains for 24 hours
inactivity: '2 hours' # Session maintains for 2 hours without actions
cookies:
- name: 'authelia_private_session'
domain: 'ilnmors.internal'
authelia_url: 'https://authelia.ilnmors.internal'
same_site: 'lax'
- name: 'authelia_public_session'
domain: 'ilnmors.com'
authelia_url: 'https://authelia.ilnmors.com'
same_site: 'lax'
# This authelia doesn't use Redis.
# Storage provider configuration
storage:
encryption_key: '' # $AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE
postgres:
address: 'tcp://postgresql.ilnmors.internal:5432'
database: 'authelia_db'
username: 'authelia'
password: '' # $AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE
tls:
server_name: 'postgresql.ilnmors.internal'
skip_verify: false
# Notification provider
notifier:
filesystem:
filename: '/config/notification.txt'
# Future goal: enable after `Postfix` and `Dovecot` are set up.
#smtp:
#address: 'smtp.ilnmors.internal'
# OIDC preparation
# Identity provisioner configuration
#identity_providers:
# oidc:
# hmac_secret: '' # $AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
# jwks:
# - algorithm: 'RS256'
# use: 'sig'
# key: {{/* {{ secret "/run/secrets/AUTHELIA_JWKS_RS256" | mindent 10 "|" | msquote }} */}}
# - algorithm: 'ES256'
# use: 'sig'
# key: {{/* {{ secret "/run/secrets/AUTHELIA_JWKS_ES256" | mindent 10 "|" | msquote }} */}}
# clients:
...
```
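To see what the `users_filter` template expands to at query time, substitute the attribute mapping and a sample login input. A sketch of the substitution (done here with `sed`; Authelia performs the equivalent internally):

```bash
FILTER='(&(|({username_attribute}={input})({mail_attribute}={input}))(objectClass=person))'
# uid/mail come from the attributes mapping; "user" is the login input
EXPANDED="$(printf '%s' "$FILTER" | sed \
  -e 's/{username_attribute}/uid/' \
  -e 's/{mail_attribute}/mail/' \
  -e 's/{input}/user/g')"
echo "$EXPANDED"   # (&(|(uid=user)(mail=user))(objectClass=person))
```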
### Quadlet
- File:
- ~/data/config/containers/authelia/authelia.container
```ini
# ~/data/config/containers/authelia/authelia.container
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Authelia
After=caddy.service
Wants=caddy.service
After=ldap.service
Requires=ldap.service
[Service]
ExecStartPre=%h/data/config/scripts/wait-for-it.sh -h localhost -p 6360 -t 0
ExecStartPre=sleep 5
[Container]
Image=authelia/authelia:4.39.13
ContainerName=authelia
AddHost=ldap.ilnmors.internal:host-gateway
# Web UI
PublishPort=9091:9091/tcp
Volume=%h/data/containers/authelia/config:/config:rw
Volume=%h/data/containers/ldap/certs:/etc/ssl/authelia:ro
# Default
Environment="TZ=Asia/Seoul"
# Enable Go template engine
# !CAUTION!
# If this environment variable is enabled, you must wrap literal {{ go_filter }} options in {{/* ... */}} comments, because the Go engine always processes its own syntax first.
Environment="X_AUTHELIA_CONFIG_FILTERS=template"
# Encryption
## JWT
Environment="AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET_FILE=/run/secrets/AUTHELIA_JWT_SECRET"
Secret=AUTHELIA_JWT_SECRET,target=/run/secrets/AUTHELIA_JWT_SECRET
## Session
Environment="AUTHELIA_SESSION_SECRET_FILE=/run/secrets/AUTHELIA_SESSION_SECRET"
Secret=AUTHELIA_SESSION_SECRET,target=/run/secrets/AUTHELIA_SESSION_SECRET
## Storage
Environment="AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE=/run/secrets/AUTHELIA_STORAGE_SECRET"
Secret=AUTHELIA_STORAGE_SECRET,target=/run/secrets/AUTHELIA_STORAGE_SECRET
# OIDC (HMAC, JWKS)
# Environment="AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE=/run/secrets/AUTHELIA_HMAC_SECRET"
# Secret=AUTHELIA_HMAC_SECRET,target=/run/secrets/AUTHELIA_HMAC_SECRET
# Secret=AUTHELIA_JWKS_RS256,target=/run/secrets/AUTHELIA_JWKS_RS256
# Secret=AUTHELIA_JWKS_ES256,target=/run/secrets/AUTHELIA_JWKS_ES256
# LDAP
Environment="AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE=/run/secrets/AUTHELIA_LDAP_PASSWORD"
Secret=AUTHELIA_LDAP_PASSWORD,target=/run/secrets/AUTHELIA_LDAP_PASSWORD
# Database
Environment="AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE=/run/secrets/POSTGRES_AUTHELIA_PASSWORD"
Secret=POSTGRES_AUTHELIA_PASSWORD,target=/run/secrets/POSTGRES_AUTHELIA_PASSWORD
Label=diun.enable=true
Label=diun.watch_repo=true
[Install]
WantedBy=default.target
```
#### Create systemd `.service` file
```bash
# linger has to be activated
ln -s ~/data/config/containers/authelia/authelia.container ~/.config/containers/systemd/authelia.container
systemctl --user daemon-reload
```
#### Enable and start service
```bash
systemctl --user start authelia.service
```
### DB backup
- Following [here](../08_development/08_02_dev_postgresql.md).
- dev server
```bash
systemctl --user enable --now postgresql-data-backup@authelia.timer
```
### Verification
- Web UI:
- https://authelia.ilnmors.internal
- https://authelia.ilnmors.com
- Login with LLDAP's User
- Login: LLDAP User
- Password: LLDAP Password
- Check the session
### Apply Forward_Auth
OIDC will be used in this homelab for applications that support it natively. However, some applications don't support OIDC, or even any login system. Therefore, Caddy's forward_auth function is used to authenticate those services.
#### Configuration files
- File:
- ~/data/containers/caddy-auth/etc/Caddyfile
- ~/data/containers/authelia/config/configuration.yml
#### Caddy
```ini
# Caddyfile
# ...
# Forward auth test in local
test-admin.ilnmors.com {
import crowdsec_log
route {
crowdsec
forward_auth host.containers.internal:9091 {
# Authelia Forward Auth endpoint URI
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
}
# back end service
root * /usr/share/caddy
file_server
}
}
# Forward auth test in local
test-default.ilnmors.com {
import crowdsec_log
route {
crowdsec
forward_auth host.containers.internal:9091 {
# Authelia Forward Auth endpoint URI
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
}
# back end service
root * /usr/share/caddy
file_server
}
}
# ...
```
#### Authelia
```yaml
# configuration.yml
# ...
# Access control configuration
access_control:
default_policy: 'deny'
rules:
# authelia portal
- domain: 'authelia.ilnmors.internal'
policy: 'bypass'
- domain: 'authelia.ilnmors.com'
policy: 'bypass'
- domain: 'test-admin.ilnmors.com'
policy: 'one_factor'
# Access control for Forward_Auth
subject:
- 'group:admins'
- domain: 'test-default.ilnmors.com'
policy: 'one_factor'
# Access control for Forward_Auth
subject:
- 'group:admins'
- 'group:users'
session:
secret: '' # $AUTHELIA_SESSION_SECRET_FILE
cookies:
- name: 'authelia_internal_session'
domain: 'ilnmors.internal'
authelia_url: 'https://authelia.ilnmors.internal'
# Redirect target after a successful login
# default_redirection_url: 'https://authelia.ilnmors.internal'
same_site: 'lax'
- name: 'authelia_com_session'
domain: 'ilnmors.com'
authelia_url: 'https://authelia.ilnmors.com'
# default_redirection_url: 'https://authelia.ilnmors.com'
same_site: 'lax'
# ...
```
#### Verification
- https://test-admin.ilnmors.com
- user_test (group: users): 403 Forbidden
- admin_test (group: admins): File server
- https://test-default.ilnmors.com
- admin_test (existing session, passes straight through): File server
- user_test (new session initiated): File server
---
### OIDC
```bash
openssl rand -base64 32
# Add this value to .secret.yaml
# APP_OIDC_KEY: secret value
# Make the hash value of this secret
# Whether it is needed depends on the app
extract_secret.sh .secret.yaml -f APP_OIDC_KEY | podman secret create APP_OIDC_KEY -
# Copy the hash value and paste it next to client_secret: in configuration.yml
# Generate the hash value
extract_secret.sh .secret.yaml -f APP_OIDC_KEY
> secret value
podman run --rm \
  authelia/authelia:4.39.13 \
  authelia crypto hash generate --password secret_value --no-confirm
# Hash value validate
podman run --rm authelia/authelia:4.39.13 authelia crypto hash validate --password secret_value -- 'HASH_VALUE'
> The password matches the digest.
```
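Why the hash, not the secret, goes into `configuration.yml`: a salted one-way function lets Authelia verify the client secret by recomputing and comparing, without storing the plaintext. A sketch using openssl's SHA-512 crypt as a stand-in for the argon2id digest that `authelia crypto hash generate` actually emits:

```bash
# A fixed salt makes the digest reproducible; verification = recompute and compare
HASH="$(openssl passwd -6 -salt examplesalt secret_value)"
CHECK="$(openssl passwd -6 -salt examplesalt secret_value)"
[ "$HASH" = "$CHECK" ] && echo 'match'   # match
```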
##### Authelia
- Uncomment (remove the annotation from) the OIDC lines in the container file
```ini
# ~/data/config/containers/authelia/authelia.container
# ...
# OIDC (HMAC, JWKS)
Environment="AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE=/run/secrets/AUTHELIA_HMAC_SECRET"
Secret=AUTHELIA_HMAC_SECRET,target=/run/secrets/AUTHELIA_HMAC_SECRET
Secret=AUTHELIA_JWKS_RS256,target=/run/secrets/AUTHELIA_JWKS_RS256
Secret=AUTHELIA_JWKS_ES256,target=/run/secrets/AUTHELIA_JWKS_ES256
# Remove annotation
# ...
```
- Fix the configuration.yml
```yaml
---
# certificates setting
# ...
# Identity provisioner configuration
identity_providers:
oidc:
hmac_secret: '' # $AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
jwks:
- algorithm: 'RS256'
use: 'sig'
key: {{ secret "/run/secrets/AUTHELIA_JWKS_RS256" | mindent 10 "|" | msquote }}
- algorithm: 'ES256'
use: 'sig'
key: {{ secret "/run/secrets/AUTHELIA_JWKS_ES256" | mindent 10 "|" | msquote }}
clients:
- client_id: 'app'
client_name: 'app'
# It depends on application
client_secret: 'HASH_VALUE'
# If there is no client secret, public should be `true`
public: false # [ true | false ]
response_types:
- 'code'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
redirect_uris:
- 'https://app.ilnmors.com/oauth2/callback'
- 'https://app.ilnmors.com/'
token_endpoint_auth_method: 'client_secret_post | client_secret_basic'
authorization_policy: 'one_factor'
...
```
```bash
# restart the service
systemctl --user daemon-reload
systemctl --user restart authelia
```
#### Caddy
- Add the reverse proxy.
```ini
# Caddyfile
# ~/data/containers/caddy-auth/etc/Caddyfile
# ...
app.ilnmors.com {
import crowdsec_log
route {
crowdsec
# The X-Forwarded-Host domain doesn't need to be applied in DNS
reverse_proxy app.ilnmors.internal {
header_up X-Forwarded-Host app.app.ilnmors.internal
}
}
}
# ...
```
```ini
# app server's sidecar caddy
app.ilnmors.internal {
import internal_tls
@notes header X-Forwarded-Host app.app.ilnmors.internal
reverse_proxy @notes host.containers.internal:3000
}
```
- Fix Caddyfile format and restart service
```bash
# fix inconsistencies
podman exec caddy-auth caddy fmt --overwrite /etc/caddy/Caddyfile
# Run this command after changing the Caddyfile.
systemctl --user daemon-reload
systemctl --user restart caddy-auth
```
#### Verification
- Following.