1.0.0 Release IaaS

This commit is contained in:
2026-03-15 04:41:02 +09:00
commit a7365da431
292 changed files with 36059 additions and 0 deletions


@@ -0,0 +1,12 @@
# Firmware installation
The cloud image ships without firmware, so the firmware for the i915 driver has to be installed separately.
When the app node is provisioned by Ansible, the `firmware-intel-graphics` and `intel-media-va-driver-non-free` packages are installed. Only when these packages change do the `update-initramfs -u` and `reboot` handlers run.
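That handler chain might be sketched as follows (the module names are real Ansible modules, but the task names and notify wiring are assumptions, not the actual playbook):

```yaml
# handlers: run only when the package tasks report "changed"
- name: Update initramfs
  ansible.builtin.command: update-initramfs -u
  notify: Reboot

- name: Reboot
  ansible.builtin.reboot:
    reboot_timeout: 600
```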
## Verification
After reboot, check the render device.
```bash
ls -l /dev/dri
# crw-rw---- 1 root video 226, 0 ... card0
# crw-rw---- 1 root render 226, 128 ... renderD128
```


@@ -0,0 +1,35 @@
# Alloy
## Communication
Alloy runs as a systemd service on the host, and PostgreSQL runs as a rootless Podman container. When the host and the container communicate, the container reaches the host via `host-gateway` (a link-local address).
## PostgreSQL monitor
### Monitor exporter
```sql
postgres=# CREATE USER alloy WITH PASSWORD 'password';
CREATE ROLE
postgres=# GRANT pg_monitor TO alloy;
GRANT ROLE
postgres=# \drg
List of role grants
Role name | Member of | Options | Grantor
-----------+------------+--------------+----------
alloy | pg_monitor | INHERIT, SET | postgres
(1 row)
```
### pg_hba.conf
```conf
hostssl postgres alloy {{ hostvars['fw']['network4']['infra']['server'] }}/32 trust
hostssl postgres alloy {{ hostvars['fw']['network6']['infra']['server'] }}/128 trust
hostssl postgres alloy {{ hostvars['fw']['network4']['subnet']['lla'] }} trust
hostssl postgres alloy {{ hostvars['fw']['network6']['subnet']['lla'] }} trust
```
### Check
```bash
curl http://localhost:12345/metrics
```
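On the Alloy side, the exporter behind that metrics endpoint can be declared roughly like this (a sketch; the DSN, component labels, and remote_write target are assumptions, not the deployed config):

```alloy
prometheus.exporter.postgres "pg" {
  data_source_names = ["postgresql://alloy@localhost:5432/postgres?sslmode=require"]
}

prometheus.scrape "pg" {
  targets    = prometheus.exporter.postgres.pg.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
```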


@@ -0,0 +1,45 @@
# Caddy
## TLS re-encryption
Strictly speaking this is not perfect E2EE, since the edge Caddy decrypts traffic, but in practice every hop stays encrypted. The main Caddy decrypts as the WAN-side edge node, then becomes a client of the backend Caddy using a private certificate.
### .com public domain
WAN - (Let's Encrypt certificate) -> Caddy (auth) - (ilnmors internal certificate) -> Caddy (app) or https services - http -> app's local service
### .internal private domain
client - (ilnmors internal certificate) -> Caddy (Infra) - http -> local services
### DNS record
*.app.ilnmors.internal - CNAME -> app.ilnmors.internal
## X-Forwarded-Host
When the Caddy on the app node terminates the re-encrypted TLS, it must rewrite the upstream `Host` header to the value of the `X-Forwarded-Host` header so that sessions are maintained.
## Example
```ini
# Auth server
test.ilnmors.com
{
import crowdsec_log
route {
crowdsec
reverse_proxy https://test.app.ilnmors.internal
}
}
# App server
test.app.ilnmors.internal
{
import internal_tls
trusted_proxies {{ hostvars['fw']['network4']['auth']['server'] }} {{ hostvars['fw']['network6']['auth']['server'] }}
route {
reverse_proxy host.containers.internal:3000 {
header_up Host {header.X-Forwarded-Host} {Host}
}
}
}
```


@@ -0,0 +1,233 @@
# Crowdsec
## LAPI
### Detecting
Host logs > CrowdSec Agent (parser) > CrowdSec LAPI
### Decision
CrowdSec LAPI (Decision + Register)
### Block
CrowdSec LAPI > CrowdSec Bouncer (Block)
## CAPI
CrowdSec CAPI > CrowdSec LAPI (local) > CrowdSec Bouncer (Block)
## Ansible Deployment
### Set LAPI (fw/roles/tasks/set_crowdsec_lapi.yaml)
- Deploy fw's config.yaml
- Deploy crowdsec certificates
- Register machines \(Agents\)
- Register bouncers \(Bouncers\)
### Set Bouncer (fw/roles/tasks/set_crowdsec_bouncer.yaml)
- Deploy crowdsec-firewall-bouncer.yaml
- Install suricata collection \(parser\) with cscli
- Set acquis.d for suricata
- set-only: the bouncer cannot report metrics for chains and rule-hit counts it did not create itself, so the Prometheus metrics are unusable with set-only true.
- Chain and rule match counts can still be checked directly in nftables.
- Use `sudo nft list chain inet filter global` to check blocked packets (the `counter` statement is required).
### Set Machines; agents (common/tasks/set_crowdsec_agent.yaml)
- Deploy config.yaml except fw \(disable LAPI, online_api_credentials\)
- Deploy local_api_credentials.yaml
### Set caddy host (auth/tasks/set_caddy.yaml)
- Set caddy CrowdSec module
- Set caddy log directory
- Install caddy collection \(parser\) with cscli
- Set acquis.d for caddy
### Set whitelist (/etc/crowdsec/parser/s02-enrich/whitelists.yaml)
- Set only local console IP address
- This can block a local VM from reaching other subnets, but communication between VMs stays possible because they share the same L2 subnet; those packets never pass the fw.
- The CrowdSec bouncer only blocks traffic in the forward chain, i.e. packets passing through the firewall, based on LAPI decisions.
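The whitelist file referred to above follows CrowdSec's whitelist-parser format; a minimal sketch (the IP is a placeholder, not the actual console address):

```yaml
name: crowdsecurity/whitelists
description: "Whitelist the local console IP"
whitelist:
  reason: "local console"
  ip:
    - "192.0.2.10"
```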
## Test
### Decision test
> Set test decisions and check it
```bash
fw@fw:/etc/crowdsec/bouncers$ sudo cscli decisions add --ip 5.5.5.5 --duration 10m --reason "Test"
INFO[12-01-2026 01:50:40] Decision successfully added
fw@fw:/etc/crowdsec/bouncers$ sudo tail -f /var/log/crowdsec-firewall-bouncer.log
time="12-01-2026 01:50:22" level=info msg="backend type : nftables"
time="12-01-2026 01:50:22" level=info msg="nftables initiated"
time="12-01-2026 01:50:22" level=info msg="Using API key auth"
time="12-01-2026 01:50:22" level=info msg="Processing new and deleted decisions . . ."
time="12-01-2026 01:50:22" level=info msg="Serving metrics at 127.0.0.1:60601/metrics"
time="12-01-2026 01:50:22" level=info msg="1320 decisions deleted"
time="12-01-2026 01:50:22" level=info msg="15810 decisions added"
time="12-01-2026 01:50:42" level=info msg="1 decision added"
fw@fw:/etc/crowdsec/bouncers$ sudo nft list ruleset | grep -i 5.5.5.5
5.5.5.5 timeout 9m54s876ms expires 9m22s296ms,
```
### Parser test
> CrowdSec "crowdsecurity/suricata-evelogs" only parses "event_type: alert". You can test with cscli explain
```bash
fw@fw:~$ sudo cscli explain --file /tmp/suri_test.log --type suricata-evelogs --verbose
line: {"timestamp":"2026-01-11T14:43:52.153576+0000","flow_id":972844861874490,"in_iface":"wan","event_type":"alert","src_ip":"197.242.151.53","src_port":42976,"dest_ip":"59.5.196.55","dest_port":38694,"proto":"TCP","flow":{"pkts_toserver":1,"pkts_toclient":0,"bytes_toserver":60,"bytes_toclient":0,"start":"2026-01-11T14:42:51.554188+0000","end":"2026-01-11T14:42:51.554188+0000","age":0,"state":"new","reason":"timeout","alerted":false},"community_id":"1:Ovyuzq7R8yA3YfxM8jEExR5BZMI=","tcp":{"tcp_flags":"02","tcp_flags_ts":"02","tcp_flags_tc":"00","syn":true,"state":"syn_sent","ts_max_regions":1,"tc_max_regions":1}}
├ s00-raw
| ├ 🟢 crowdsecurity/non-syslog (first_parser)
| └ 🔴 crowdsecurity/syslog-logs
├ s01-parse
| ├ 🔴 crowdsecurity/apache2-logs
| ├ 🔴 crowdsecurity/nginx-logs
| ├ 🔴 crowdsecurity/sshd-logs
| ├ 🟢 crowdsecurity/suricata-evelogs (+9 ~2)
| ├ update evt.Stage : s01-parse -> s02-enrich
| ├ create evt.Parsed.dest_ip : 59.5.196.55
| ├ create evt.Parsed.dest_port : 38694
| ├ create evt.Parsed.proto : TCP
| ├ create evt.Parsed.time : 2026-01-11T14:43:52.153576
| ├ update evt.StrTime : -> 2026-01-11T14:43:52.153576Z
| ├ create evt.Meta.log_type : suricata_alert
| ├ create evt.Meta.service : suricata
| ├ create evt.Meta.source_ip : 197.242.151.53
| ├ create evt.Meta.sub_log_type : suricata_alert_eve_json
| ├ create evt.Meta.suricata_flow_id : 972844861874490
| └ 🔴 crowdsecurity/suricata-fastlogs
├ s02-enrich
| ├ 🟢 crowdsecurity/dateparse-enrich (+2 ~1)
| ├ create evt.Enriched.MarshaledTime : 2026-01-11T14:43:52.153576Z
| ├ update evt.MarshaledTime : -> 2026-01-11T14:43:52.153576Z
| ├ create evt.Meta.timestamp : 2026-01-11T14:43:52.153576Z
| ├ 🟢 crowdsecurity/geoip-enrich (+13)
| ├ create evt.Enriched.IsInEU : false
| ├ create evt.Enriched.IsoCode : ZA
| ├ create evt.Enriched.ASNumber : 37611
| ├ create evt.Enriched.Latitude : -28.998400
| ├ create evt.Enriched.Longitude : 23.988800
| ├ create evt.Enriched.SourceRange : 197.242.144.0/20
| ├ create evt.Enriched.ASNNumber : 37611
| ├ create evt.Enriched.ASNOrg : Afrihost
| ├ create evt.Meta.ASNNumber : 37611
| ├ create evt.Meta.IsInEU : false
| ├ create evt.Meta.SourceRange : 197.242.144.0/20
| ├ create evt.Meta.ASNOrg : Afrihost
| ├ create evt.Meta.IsoCode : ZA
| ├ 🔴 crowdsecurity/http-logs
| └ 🟢 crowdsecurity/whitelists (unchanged)
├-------- parser success 🟢
├ Scenarios
```
#### Caddy
```bash
auth@auth:~/containers/authelia/config$ sudo cscli explain --file /var/log/caddy/access.log --type caddy
line: {"level":"info","ts":1771601235.7503738,"logger":"http.log.access.log1","msg":"handled request","request":{"remote_ip":"192.168.99.20","remote_port":"59900","client_ip":"192.168.99.20","proto":"HTTP/2.0","method":"GET","host":"authelia.ilnmors.com","uri":"/static/js/components.TimerIcon.CO1b_Yfm.js","headers":{"Accept-Encoding":["gzip, deflate, br, zstd"],"Referer":["https://authelia.ilnmors.com/settings"],"Te":["trailers"],"Accept":["*/*"],"Sec-Fetch-Dest":["script"],"Priority":["u=1"],"Sec-Fetch-Mode":["cors"],"Accept-Language":["en-US,en;q=0.9"],"Cookie":["REDACTED"],"Sec-Fetch-Site":["same-origin"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:147.0) Gecko/20100101 Firefox/147.0"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"authelia.ilnmors.com"}},"bytes_read":0,"user_id":"","duration":0.0077169,"size":10193,"status":200,"resp_headers":{"Via":["1.1 Caddy"],"Alt-Svc":["h3=\":443\"; ma=2592000"],"X-Content-Type-Options":["nosniff"],"Content-Security-Policy":["default-src 'none'"],"Date":["Fri, 20 Feb 2026 15:27:15 GMT"],"Etag":["7850315714d1e01e73f4879aa3cb7465b4e879dc"],"Cache-Control":["public, max-age=0, must-revalidate"],"Content-Length":["10193"],"X-Frame-Options":["DENY"],"Content-Type":["text/javascript; charset=utf-8"],"Referrer-Policy":["strict-origin-when-cross-origin"],"Permissions-Policy":["accelerometer=(), autoplay=(), camera=(), display-capture=(), geolocation=(), gyroscope=(), keyboard-map=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), screen-wake-lock=(), sync-xhr=(), xr-spatial-tracking=(), interest-cohort=()"],"X-Dns-Prefetch-Control":["off"]}}
├ s00-raw
| ├ 🟢 crowdsecurity/non-syslog (first_parser)
| └ 🔴 crowdsecurity/syslog-logs
├ s01-parse
| ├ 🔴 crowdsecurity/apache2-logs
| └ 🟢 crowdsecurity/caddy-logs (+19 ~2)
├ s02-enrich
| ├ 🟢 crowdsecurity/dateparse-enrich (+2 ~1)
| ├ 🟢 crowdsecurity/http-logs (+7)
| └ 🟢 crowdsecurity/whitelists (~2 [whitelisted])
└-------- parser failure 🔴
```
## BAN logs case
### LAPI metrics
```bash
fw@fw:~$ sudo cscli metrics
Acquisition Metrics:
╭─────────────────────────────────────────────────┬────────────┬──────────────┬────────────────┬────────────────────────╮
│ Source │ Lines read │ Lines parsed │ Lines unparsed │ Lines poured to bucket │
├─────────────────────────────────────────────────┼────────────┼──────────────┼────────────────┼────────────────────────┤
│ file:/var/log/suricata/eve.json │ 130.25k │ - │ 130.25k │ - │
│ journalctl:journalctl-_SYSTEMD_UNIT=ssh.service │ 6 │ - │ 6 │ - │
╰─────────────────────────────────────────────────┴────────────┴──────────────┴────────────────┴────────────────────────╯
Parser Metrics:
╭─────────────────────────────────┬─────────┬─────────┬──────────╮
│ Parsers │ Hits │ Parsed │ Unparsed │
├─────────────────────────────────┼─────────┼─────────┼──────────┤
│ child-crowdsecurity/sshd-logs │ 60 │ - │ 60 │
│ child-crowdsecurity/syslog-logs │ 6 │ 6 │ - │
│ crowdsecurity/non-syslog │ 130.25k │ 130.25k │ - │
│ crowdsecurity/sshd-logs │ 6 │ - │ 6 │
│ crowdsecurity/syslog-logs │ 6 │ 6 │ - │
╰─────────────────────────────────┴─────────┴─────────┴──────────╯
Local Api Metrics:
╭──────────────────────┬────────┬───────╮
│ Route │ Method │ Hits │
├──────────────────────┼────────┼───────┤
│ /v1/alerts │ GET │ 1 │
│ /v1/alerts │ POST │ 6 │
│ /v1/decisions/stream │ GET │ 11337 │
│ /v1/heartbeat │ GET │ 8053 │
│ /v1/watchers/login │ POST │ 145 │
╰──────────────────────┴────────┴───────╯
Local Api Machines Metrics:
╭─────────┬───────────────┬────────┬──────╮
│ Machine │ Route │ Method │ Hits │
├─────────┼───────────────┼────────┼──────┤
│ app │ /v1/heartbeat │ GET │ 1587 │
│ auth │ /v1/alerts │ GET │ 1 │
│ auth │ /v1/alerts │ POST │ 6 │
│ auth │ /v1/heartbeat │ GET │ 1605 │
│ fw │ /v1/heartbeat │ GET │ 1621 │
│ infra │ /v1/heartbeat │ GET │ 1620 │
│ vmm │ /v1/heartbeat │ GET │ 1620 │
╰─────────┴───────────────┴────────┴──────╯
Local Api Bouncers Metrics:
╭───────────────┬──────────────────────┬────────┬──────╮
│ Bouncer │ Route │ Method │ Hits │
├───────────────┼──────────────────────┼────────┼──────┤
│ caddy-bouncer │ /v1/decisions/stream │ GET │ 1608 │
│ fw-bouncer │ /v1/decisions/stream │ GET │ 9729 │
╰───────────────┴──────────────────────┴────────┴──────╯
Local Api Decisions:
╭─────────────────┬────────┬────────┬───────╮
│ Reason │ Origin │ Action │ Count │
├─────────────────┼────────┼────────┼───────┤
│ http:exploit │ CAPI │ ban │ 17803 │
│ http:scan │ CAPI │ ban │ 4583 │
│ ssh:bruteforce │ CAPI │ ban │ 2509 │
│ http:bruteforce │ CAPI │ ban │ 1721 │
│ http:crawl │ CAPI │ ban │ 87 │
│ http:dos │ CAPI │ ban │ 15 │
╰─────────────────┴────────┴────────┴───────╯
Local Api Alerts:
╭───────────────────────────────────┬───────╮
│ Reason │ Count │
├───────────────────────────────────┼───────┤
│ crowdsecurity/http-bad-user-agent │ 2 │
│ crowdsecurity/jira_cve-2021-26086 │ 4 │
╰───────────────────────────────────┴───────╯
```
### WAF parser alerts
```bash
auth@auth:~$ sudo cscli alerts list
╭────┬────────────────────┬───────────────────────────────────┬─────────┬────┬───────────┬─────────────────────────────────────────╮
│ ID │ value │ reason │ country │ as │ decisions │ created_at │
├────┼────────────────────┼───────────────────────────────────┼─────────┼────┼───────────┼─────────────────────────────────────────┤
│ 25 │ Ip:206.168.34.127 │ crowdsecurity/http-bad-user-agent │ │ │ ban:1 │ 2026-03-07 02:26:58.074029091 +0000 UTC │
│ 23 │ Ip:162.142.125.212 │ crowdsecurity/http-bad-user-agent │ │ │ ban:1 │ 2026-03-07 00:19:08.421713824 +0000 UTC │
│ 12 │ Ip:159.65.144.72 │ crowdsecurity/jira_cve-2021-26086 │ │ │ ban:1 │ 2026-03-06 04:19:04.975124762 +0000 UTC │
│ 11 │ Ip:206.189.95.232 │ crowdsecurity/jira_cve-2021-26086 │ │ │ ban:1 │ 2026-03-06 04:19:01.215582087 +0000 UTC │
│ 10 │ Ip:68.183.9.16 │ crowdsecurity/jira_cve-2021-26086 │ │ │ ban:1 │ 2026-03-06 04:18:22.120468981 +0000 UTC │
│ 9 │ Ip:138.68.144.227 │ crowdsecurity/jira_cve-2021-26086 │ │ │ ban:1 │ 2026-03-06 04:18:18.35776077 +0000 UTC │
╰────┴────────────────────┴───────────────────────────────────┴─────────┴────┴───────────┴─────────────────────────────────────────╯
```


@@ -0,0 +1,14 @@
# Kopia
Kopia is a modern backup solution with very strong deduplication.
## Repository
Kopia saves all information, including users and policies, in the repository. The repository is self-contained, and it is encrypted with a master password.
## User and policy
When Kopia runs as a server, clients access it with a username and user password; they never need the master password. The Kopia server decrypts the repository with the master password, and clients only authenticate to the server with their user accounts.
Repository <- Master password -> Kopia server <- User password -> Kopia client


docs/services/fw/kea.md

@@ -0,0 +1,29 @@
# IP
## IPv4
### Subnet management
- Static subnet (managed without DHCP)
  - client (for IPv4, set a reservation)
  - server
- Dynamic subnet (managed with DHCP)
  - user
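For the static subnet, an IPv4 reservation in Kea might look like this (a fragment; the subnet, id, and MAC address are placeholder assumptions):

```json
{
  "Dhcp4": {
    "subnet4": [
      {
        "id": 1,
        "subnet": "192.0.2.0/24",
        "reservations": [
          { "hw-address": "0a:49:6e:4d:00:10", "ip-address": "192.0.2.20" }
        ]
      }
    ]
  }
}
```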
## IPv6
### Subnet management
- Static subnet (managed without RA; addresses defined explicitly)
  - client (designated ULA with NAT66)
  - server (designated ULA with NAT66)
- Dynamic subnet (managed with RA and SLAAC)
  - user (autogenerated GUA)
## Firewall policy for each subnet
### Static subnet
Make policies based on each node's specific designated IP address.
### Dynamic subnet
Make policies based on the subnet (or the interface itself).

docs/services/infra/ca.md

@@ -0,0 +1,146 @@
## Operation
Refer to Ansible playbook
## Configuration files
- ca.json
- defaults.json
### Provisioner
A provisioner is the object that issues certificates, acting as an RA. It verifies the CSR from the client and, when the CSR satisfies its policy, signs the certificate with the CA's private key. Step-CA supports various types of provisioners; in this homelab only ACME is used, because the infrastructure's certificates are issued manually. Step-CA supports one root CA and one intermediate CA per container, and only one intermediate CA is operated in this project.
#### jwk-ca@ilnmors.internal
This provisioner issues intermediate CA certificates. It is not used in this project. The CA-related X.509 options are optional and defined as extensions; to set them in step-ca, a template file is needed.
- file: ~/data/containers/step-ca/templates/ca.tpl
```json
{
"subject": {{ toJson .Subject }},
"keyUsage": ["certSign", "crlSign"],
"basicConstraints": {
"isCA": true,
"maxPathLen": 0
}
}
```
> keyUsage: allows signing certificates and CRLs
> isCA: marks the certificate as a CA
> maxPathLen: limits how many CA levels are allowed below this one
- Define provisioner
```bash
# Inline comments after "\" break the line continuation, so they are listed below
podman exec -it step-ca \
step ca provisioner add jwk-ca@ilnmors.internal \
--create \
--type JWK \
--ca-config /home/step/config/ca.json \
--x509-template /home/step/template/ca.tpl \
--x509-max-dur 87600h \
--x509-default-dur 87600h
# --create: generate the key pair automatically
# --ca-config: sign the certificate with the root CA's private key
# --x509-template: use the x509 template above
# --x509-max-dur, --x509-default-dur: 10-year maximum and default lifetimes
```
#### jwk@ilnmors.internal
This provisioner issues certificates for things like DB communication based on identity (using a pre-shared JWK/JWT). The certificate is issued against a key enrolled in the provisioner. In this project, however, all certificates are obtained through the central ACME client, Caddy.
- Define provisioner
```bash
# Inline comments after "\" break the line continuation, so they are listed below
podman exec -it step-ca \
step ca provisioner add jwk@ilnmors.internal \
--create \
--type JWK \
--x509-default-dur 2160h
# --create: generate the key pair automatically
# --x509-default-dur: default lifetime of 90 days
```
#### acme@ilnmors.internal
This provisioner issues certificates for HTTPS communication. The certificate is issued based on a challenge proving ownership of the domain.
- Define provisioner
```bash
podman exec -it step-ca \
step ca provisioner add acme@ilnmors.internal \
--type ACME \
--x509-default-dur 2160h # To set default expire date as 90 days.
```
### Subject
Step-CA uses a subject as an account for managing Step-CA remotely. To use this, pass the `--remote-management` option when step-ca is initialized, or set `authority.enableAdmin: true` in `ca.json`. When this is enabled, provisioners are stored in Step-CA's own DB rather than in `ca.json`.
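As a fragment, the `ca.json` toggle looks like:

```json
{
  "authority": {
    "enableAdmin": true
  }
}
```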
### Policy
A self-hosted Step-CA server doesn't support per-provisioner x509 policies; only an authority-level policy is allowed. Since only `ilnmors.internal` and `*.ilnmors.internal` certificates are required, designate the policy in `ca.json`.
> Policies can be administered using the step CLI application. The commands are part of the step ca policy namespace. In a self-hosted step-ca, policies can be configured on the authority level. Source: [here](https://smallstep.com/docs/step-ca/policies/)
- file: ~/data/containers/step-ca/config/ca.json
```json
...
"authority": {
"policy": {
"x509": {
"allow": {
"dns": [
"ilnmors.internal",
"*.ilnmors.internal"
]
},
"allowWildcardNames": true
}
},
"provisioners": [ ... ]
....
}
...
```
## Verify server
### Server health check
```bash
curl -k https://ca.ilnmors.internal:9000/health
> {"status":"ok"}
```
### Server policy check
```bash
podman exec -it ca step ca certificate test.com test.crt test_key --provisioner acme@ilnmors.internal
> error creating new ACME order: The server will not issue certificates for the identifier
```
---
## Set trust Root CRT
### Linux
#### Debian/Ubuntu
- File: /usr/local/share/ca-certificates/{ca.crt, ca.pem}
- `update-ca-certificates`
#### CentOS/RHEL/Fedora
- File: /etc/pki/ca-trust/source/anchors/{ca.crt, ca.pem}
- `update-ca-trust`
### Windows
- `Windows + R` - `certlm.msc`
- `All Tasks` - `Import`
### Firefox
- Settings - Privacy & Security - View Certificates - Authorities - Import
- \[x\] Trust this CA to identify websites
- \[x\] Trust this CA to identify email users


@@ -0,0 +1,20 @@
# Grafana
## Operation
Refer to Ansible playbook
(A PostgreSQL user and DB are needed)
(An LDAP strict read-only account is needed)
## Verification
- Check the Caddyfile (without Caddy, use port 3000)
- https://grafana.ilnmors.internal
- login with LDAP user
- connection:data sources: \[prometheus|loki\]: provisioned
- https://prometheus.ilnmors.internal:9090
- https://loki.ilnmors.internal:3100
- check drill down:metrics
## Dashboard
- Dashboards aren't saved in a local directory; they are saved in the DB (PostgreSQL).
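The LDAP read-only account can be wired into Grafana's `ldap.toml` roughly like this (a sketch; the DNs, filter, and password variable are assumptions based on lldap's default layout):

```toml
[[servers]]
host = "ldap.ilnmors.internal"
port = 636
use_ssl = true
bind_dn = "uid=grafana,ou=people,dc=ilnmors,dc=internal"
bind_password = "$GRAFANA_LDAP_PASSWORD"
search_filter = "(uid=%s)"
search_base_dns = ["ou=people,dc=ilnmors,dc=internal"]

[servers.attributes]
username = "uid"
name = "displayName"
email = "mail"
```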

docs/services/infra/ldap.md

@@ -0,0 +1,154 @@
## Operation
Refer to Ansible playbook
\(Postgresql user and DB is needed\)
Example configurations for integrating various apps: https://github.com/lldap/lldap/blob/main/example_configs
## Configuration
### DB URL
Jinja2's `urlencode` filter doesn't encode `/` as `%2F`, so `replace('/', '%2F')` is necessary.
ex) {{ var | urlencode | replace('/', '%2F') }}
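Jinja2's `urlencode` behaves like Python's `urllib.parse.quote`, which treats `/` as "safe" by default; a quick demonstration of why the extra replace step is needed (the password value is a made-up example):

```python
from urllib.parse import quote

password = "p@ss/word"

# '/' is in quote()'s default "safe" set, so it is left alone
encoded = quote(password)
print(encoded)                      # p%40ss/word

# the extra replace makes the value safe for a DB URL
print(encoded.replace("/", "%2F"))  # p%40ss%2Fword
```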
### Reset administrator password
```bash
# infra
sudo nano $LDAP_PATH/data/lldap_config.toml
# Add below on file
ldap_user_pass = "REPLACE_WITH_PASSWORD"
force_ldap_user_pass_reset = true
# Restart lldap
systemctl --user restart ldap.service
# Delete added lines from lldap_config.toml
# ldap_user_pass = "REPLACE_WITH_PASSWORD"
# *YOU MUST DELETE PASSWORD PART*
# force_ldap_user_pass_reset = true
```
### Access web UI and Login
- URL: http://ldap.ilnmors.internal:17170 (temporary access before the Caddy reverse proxy is set up)
- ID: admin
- PW: $LLDAP_LDAP_USER_PASSWORD
### Create the groups
- Groups - \[\+\] Create a group
- Group: admins
- Group: users
ACLs are managed via Authelia based on these groups.
### Create the authelia user for OIDC \(OP\)
- Users: \[\+\] Create a user
- Username (cn; uid): authelia
- Display name: Authelia
- First Name: Authelia
- Last Name (sn): Service
- Email (mail): authelia@ilnmors.internal
- Password: "$(openssl rand -base64 32)"
- Groups:lldap_strict_readonly: \[Add to group\]
- This group grants search permission.
- Users: \[\+\] Create a user
- Username (cn; uid): grafana
- Display name: Grafana
- First Name: Grafana
- Last Name (sn): Service
- Email (mail): grafana@ilnmors.internal
- Password: "$(openssl rand -base64 32)"
- Groups:lldap_strict_readonly: \[Add to group\]
- This group grants search permission.
> Save the password in .secret.yaml
### Create the normal users
- Users: \[\+\] Create a user
- Username (cn; uid): il
- First Name: Il
- Last Name (sn): Lee
- Email (mail): il@ilnmors.internal
- Password: "$PASSWORD"
- Groups:lldap_admin&admins&users: \[Add to group\]
- Users: \[\+\] Create a user
- Username (cn; uid): user
- First Name: John
- Last Name (sn): Doe
- Email (mail): john_doe@ilnmors.internal
- Password: "$PASSWORD"
- Groups:(admins|users): \[Add to group\]
> Custom attributes under `User schema` and `Group schema` don't need to be added. They are an advanced feature for extra values such as an identity number or phone number; the hardcoded schema lldap provides is enough for Authelia.
> After all these steps, you can integrate Authelia for SSO.
## Usage of LDAP
### Service Bind
LDAP calls `login` a Bind. When Authelia binds to the LDAP server, it gains the search permission of the `lldap_strict_readonly` group.
### Search
Since the authelia account has search permission, it can send search queries.
#### Flow of search
- Client (authelia) sends the query
- `uid=user in dc=ilnmors,dc=internal`
- LDAP server searches the DN of entry
- `uid=user,ou=people,dc=ilnmors,dc=internal`
- LDAP sends the DN to Client (authelia)
## Authelia's work flow
### First login
#### User login query
A user tries to log in on Authelia's login page.
- id: user
- password: 1234
#### Service Bind (Bind and search)
Authelia binds to the LLDAP server using the credentials in its configuration.yml.
- dn: authelia
- password: authelia's password
#### Search
authelia sends the query to LLDAP after bind.
- `uid=user in dc=ilnmors,dc=internal`
#### Request
The LLDAP server searches for the entry and sends the DN back to Authelia.
- `uid=user,ou=people,dc=ilnmors,dc=internal`
### Verify the user login (Second login)
#### User Bind (Bind only)
Authelia tries to bind to the LLDAP server with the credentials the user entered.
- dn: requested uid
- password: 1234
#### Verification from LLDAP
LLDAP verifies the password against the hash stored in its database.
#### Request
LLDAP server sends the result as `Success` or `Fail`.
> Search permission is the baseline permission of any user who binds to the LDAP server. Checking whether the bind succeeds or fails is Authelia's responsibility.
## Verify
- openssl s_client -connect ldap.ilnmors.internal:636 -tls1_3


@@ -0,0 +1,12 @@
# Loki
## Operation
Refer to Ansible playbook
## Verification
- fw@fw:/var/lib/bind$ curl -k https://loki.ilnmors.internal:3100/ready (run from a node in NET_SERVER other than infra itself)
- ready
- fw@fw:/var/lib/bind$ curl -k https://loki.ilnmors.internal:3100/metrics
- metrics lists
- fw@fw:/var/lib/bind$ curl -k https://loki.ilnmors.internal:3100/loki/api/v1/labels
- no org id
- JSON format labels when alloy is set


@@ -0,0 +1,64 @@
# Postgresql
## Operation
Refer to Ansible playbook
## File management
```bash
# console
## cluster
scp infra@infra:$POSTGRESQL_BACKUP_PATH/pg_cluster.sql $HOMELAB_PATH/data/backups/infra/postgresql/pg_cluster.sql
## data
scp infra@infra:$POSTGRESQL_BACKUP_PATH/pg_backup.sql $HOMELAB_PATH/data/backups/infra/postgresql/pg_backup.sql
## The data is managed by kopia.
```
## Verification
```bash
# ... Start postgresql service
# Create user and database
podman exec -it -u postgres postgresql psql -U postgres
> CREATE USER service WITH PASSWORD 'abc';
> CREATE DATABASE service_db;
> ALTER DATABASE service_db OWNER TO service;
> \du
> \l
> \q
# Reset database
> SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'service_db'; # connection reset
> DROP DATABASE service_db;
> CREATE DATABASE service_db;
> ALTER DATABASE service_db OWNER TO service;
> \du
> \l
> \q
# Restore database (manually)
podman exec -u postgres postgresql psql -U postgres -f $POSTGRESQL_BACKUP_PATH_IN_CONTAINER/script.sql
# Backup service executes
systemctl --user start postgresql-cluster-backup.service
# Stop and remove all data
systemctl --user stop postgresql
sudo find "/home/infra/data/containers/postgresql/data" -mindepth 1 -delete
# Restore database
# Just locate sql files on data_path, and use playbooks
# Check restoring
podman exec -it -u postgres postgresql psql -U postgres
> \du
> \l
# Check extension
postgres=# SHOW shared_preload_libraries;
shared_preload_libraries
--------------------------
vchord.so
(1 row)
```


@@ -0,0 +1,12 @@
# Prometheus
## Operation
Refer to Ansible playbook
## Verification
- Check the Caddyfile (without Caddy, use port 9090)
- https://prometheus.ilnmors.internal
- Status:Target Health
- Check that `Endpoint localhost:9090` shows a green circle
- Status:command-line flag
- Check `--web.enable-remote-write-receiver: true`


@@ -0,0 +1,35 @@
# systemd-networkd
- Use `networkctl` and the files in `/etc/systemd/network`
- link file: links a hardware interface to the kernel during boot
- netdev file: defines virtual interfaces (port, bridge)
- network file: defines network options on top of those interfaces
## commands
- reload
- `networkctl reload`
- `networkctl reconfigure [interface name]`
## references
- https://manpages.debian.org/testing/systemd/systemd/networkctl.1.en.html
- https://manpages.debian.org/testing/systemd/systemd.link.5.en.html
- https://manpages.debian.org/testing/systemd/systemd.network.5.en.html
- https://manpages.debian.org/testing/systemd/systemd.netdev.5.en.html
## Plans
- Hypervisor's Linux bridges work as L2 switches
- br0 is a pure L2 switch (LinkLocalAddressing=no)
- br1 has an IP address for the hypervisor itself, but basically works as an L2 switch which can handle VLAN tags; id=1,10
- Firewall's port (wan) works as a gateway which can conduct NAT
- Firewall's port (clients) works as a trunk port which can handle VLAN tags; id=1,10,20
- Firewall's port
- client, id = 1
- server, id = 10
- user, id = 20
- wg0
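The br1 part of that plan could be sketched in systemd-networkd units like this (a sketch under the plan's VLAN IDs; the file names and paths are assumptions):

```ini
# /etc/systemd/network/br1.netdev
[NetDev]
Name=br1
Kind=bridge

[Bridge]
VLANFiltering=yes

# /etc/systemd/network/br1.network
[Match]
Name=br1

[BridgeVLAN]
VLAN=1

[BridgeVLAN]
VLAN=10
```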


@@ -0,0 +1,67 @@
# systemd-quadlet
Quadlet defines container configuration and lifecycle by combining systemd and Podman.
## Rootless container
Containers should be isolated from the host OS. Docker, however, runs through a root-privileged daemon (dockerd): if one container has a vulnerability and is taken over, the whole host's authority is threatened. Podman runs rootless and daemonless, so even if a container is taken over, the damage is contained within the host's normal user privileges.
A rootless container maps UIDs/GIDs between the host and its own user namespace. The host user's UID/GID is mapped to the container's root, and the host's subuid/subgid ranges defined in `/etc/subuid` and `/etc/subgid` are mapped to the container's other UIDs/GIDs by default.
- Default `/etc/subuid` and `/etc/subgid`
- user:100000:65536
- host user 1000 > container root 0
- host subuid 100999 > containers 1000
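The arithmetic behind that default mapping can be sketched with a toy helper (not part of podman; the values mirror the example above):

```python
def host_uid(container_uid: int, host_user_uid: int = 1000,
             subuid_start: int = 100000) -> int:
    """Map a container UID to its host UID under the default rootless scheme."""
    if container_uid == 0:
        return host_user_uid              # container root is the host user itself
    # remaining container UIDs come from the /etc/subuid range, starting at 1
    return subuid_start + container_uid - 1

print(host_uid(0))     # 1000   (container root -> host user)
print(host_uid(1000))  # 100999 (matches the example above)
```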
Rootless services normally depend on a login session. Set `linger` to keep services running regardless of the session.
- `sudo loginctl enable-linger user`
- `ls /var/lib/systemd/linger/user`
## Quadlet
Quadlet defines the container specification in a `.container` file and automatically generates a `.service` unit for systemd, so systemd can manage the container like any other service with `systemctl`.
```ini
# $HOME/.config/containers/systemd/a.container
[Quadlet]
# Don't make a dependencies
DefaultDependencies=false
[Unit]
Description=app
After=network-online.target
Wants=network-online.target
BindsTo=a.service
Requires=a.service
[Service]
ExecStartPre=/bin/sh -c 'echo "Waiting for infra-postgresql..."; until nc -z postgresql.ilnmors.internal 5432; do sleep 1; done;'
[Container]
Image=localhost/app:1.0.0
ContainerName=app
PublishPort=2080:80/tcp
PublishPort=2443:443/tcp
AddHost=app.service.internal:host-gateway
Volume=%h/data/containers/app:/home/app:rw
Environment="ENV1=ENV1"
Secret=ENV_NAME,type=env
Secret=app.file,target=/path/of/secret/file/name
# podman run [options] [image] example --config exconfig
Exec=example --config exconfig
# If you want to change Entrypoint itself, use
Entrypoint=sh -c 'command'
[Install]
# Guarantee auto start
WantedBy=default.target
```


@@ -0,0 +1,125 @@
# cloud-init and seed.iso
## reference
- https://cloudinit.readthedocs.io/en/latest/reference/examples.html#yaml-examples
## packages
- cloud-image-utils
- genisoimage
## meta-data
- meta-data.yaml
```yaml
instance-id: test-vm-$DATE
local-hostname: test
```
## user-data
- user-data.yaml
```yaml
#cloud-config
# Commands executed early in the boot process
bootcmd:
- groupadd -g 2000 svadmins || true
hostname: test
# Automatically resize the partition and filesystem to match the virtual disk image
growpart:
mode: auto
devices: ['/']
ignore_growroot_disabled: false
resize_rootfs: true
# prohibit root login
disable_root: true
users:
- name: test
gecos: test
primary_group: svadmins
groups: sudo
lock_passwd: false
passwd: $(openssl passwd -6 'password')
shell: /bin/bash
ssh_authorized_keys:
- 'ssh-ed25519 KEY_VALUE'
write_files:
# ip_forward option
- path: /etc/sysctl.d/ipforward.conf
content: |
net.ipv4.ip_forward=1
permissions: '0644'
# systemd-networkd files
- path: /etc/systemd/network/00-eth0.link
content: |
[Match]
MACAddress=0a:49:6e:4d:00:00
[Link]
Name=eth0
permissions: '0644'
# - path: /etc/systemd/network/files....
# ssh host files
- path: /etc/ssh/id_test_ssh_host
content: |
-----BEGIN OPENSSH PRIVATE KEY-----
-----END OPENSSH PRIVATE KEY-----
permissions: '0600'
- path: /etc/ssh/id_test_ssh_host.pub
content: |
ssh-ed25519 KEY_VALUE TEST_SSH_HOST
permissions: '0644'
- path: /etc/ssh/id_test_ssh_host-cert.pub
content: |
ssh-ed25519-cert-v01@openssh.com KEY_VALUE TEST_SSH_HOST
permissions: '0644'
# sshd_config
- path: /etc/ssh/sshd_config.d/cert.conf
content: |
HostKey /etc/ssh/id_test_ssh_host
HostCertificate /etc/ssh/id_test_ssh_host-cert.pub
permissions: '0644'
- path: /etc/ssh/sshd_config.d/permit_root_login.conf
content: |
PermitRootLogin no
permissions: '0644'
runcmd:
# systemd-networkd interface loading
- update-initramfs -u
- systemctl disable networking
- systemctl enable systemd-networkd
- systemctl enable getty@ttyS0
- sync
power_state:
delay: "now"
mode: reboot
message: "rebooting after cloud-init configuration"
timeout: 30
```
## network-config
- network-config.yaml
```yaml
version: 2
ethernets: {}
network:
config: disabled
```
## Create seed.iso
```bash
cloud-localds -N network-config.yaml test_seed.iso user-data.yaml meta-data.yaml
```


@@ -0,0 +1,18 @@
# Undefine VM
Undefining a VM is a critical operation for the whole system.
## process
```bash
# Shutdown VM
systemctl --user stop "$VM_NAME".service
## virsh stop|destroy "$VM_NAME"
# Undefine VM
virsh undefine "$VM_NAME" --nvram # All VMs use UEFI, so --nvram is needed to remove the nvram file
# Delete VM files
sudo rm -r /var/lib/libvirt/images/"$VM_NAME".qcow2
sudo rm -r /var/lib/libvirt/seeds/"$VM_NAME"_seed.iso
```