Compare commits

...

41 Commits

Author SHA1 Message Date
il 02fa912cb1 feat(trilium): release trilium
deployment notes:
- OIDC error: users cannot log in on the first attempt; a second login is required when using OIDC
2026-05-09 22:38:57 +09:00
il aceef4bdaa refactor(authelia): update authelia.yaml.j2 to fix redirect_uris from hardcoded uris to ansible variables 2026-05-09 21:44:11 +09:00
il 64aad4fcf0 docs(all): fix markdown syntax and snippets 2026-05-09 20:54:32 +09:00
il 81244d55a7 feat(wiki.js): release wiki.js
deployment notes:
- use this as a personal/family wiki system
- compare with affine, memos, and triliumNext
2026-05-09 17:50:05 +09:00
il 1cfd024285 refactor(x509-exporter): update notification to restart x509-exporter when its config.yaml is changed 2026-05-09 17:42:35 +09:00
il 26115c5660 feat(redis): update redis from 8.6.1 to 8.6.3
update notes:
- run 'ansible-playbook playbooks/app/site.yaml --tags "site"' so that all redis instances are updated at once
2026-05-09 13:55:28 +09:00
il acef35ca8b feat(postgresql): update postgresql and vectorchord extension
update notes:
- update postgresql version from 18.2 to 18.3
- update vectorchord version from 0.5.3 to 1.1.1
- add update flow and notice to postgresql.md
2026-05-09 13:54:10 +09:00
il b531170bd7 feat(vaultwarden): update vaultwarden from 1.35.8 to 1.36.0 2026-05-09 12:56:22 +09:00
il ad586c3cd3 feat(grafana): update grafana from 12.3.3 to 13.0.1 2026-05-09 12:50:36 +09:00
il 6dfef08f7b feat(prometheus): update prometheus from v3.9.1 to v3.11.3 2026-05-09 12:44:44 +09:00
il 934dd314a8 feat(x509-exporter): update x509-exporter from 3.21.0 to 4.1.0
update notes:
- '--listen-address' and '--watch-dir' cli flags are deprecated
- add '--config' cli flag and config.yaml
2026-05-09 12:44:05 +09:00
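Since the new config schema is not shown in this compare view, a hypothetical config.yaml sketch of the flag migration (key names are assumptions, not taken from the exporter's documentation):

```yaml
# Hypothetical replacement for the deprecated CLI flags.
# Key names are assumptions; check the x509-exporter release notes.
listenAddress: ":9793"   # was --listen-address
watchDirectories:        # was --watch-dir
  - /etc/ssl/certs
```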
il 2529a918df feat(loki): update loki version from 3.6.5 to 3.7.1 2026-05-09 12:17:16 +09:00
il 7dfa20d3dd feat(ldap): update lldap version from 0.6.2 to 0.6.3 2026-05-09 11:56:19 +09:00
il 329620c7d7 feat(alloy): update alloy version from 1.13.0 to 1.16.1 2026-05-09 11:52:25 +09:00
il f820e89cf6 refactor(roles): update binary application installation flow
update notes:
- keep set_cli_tools responsible only for console CLI tools
- download and install kopia from the kopia role
- download and install blocky from the blocky role
- download and install alloy from the alloy role
- reduce console artifact staging for service binaries
2026-05-09 10:46:29 +09:00
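The per-role installation flow described above follows the pattern visible in the file diffs further down: each role downloads its own .deb with get_url and installs it with apt, instead of staging binaries on the console host. A minimal sketch of that pattern (kopia shown; blocky and alloy are analogous):

```yaml
# Sketch of the per-role binary install pattern.
- name: Download kopia deb file (x86_64)
  ansible.builtin.get_url:
    url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
          kopia_{{ version['packages']['kopia'] }}_linux_amd64.deb"
    dest: "/var/cache/apt/archives/kopia-{{ version['packages']['kopia'] }}.deb"
    mode: "0644"
  become: true
  when: ansible_facts['architecture'] == "x86_64"
- name: Install kopia
  ansible.builtin.apt:
    deb: "/var/cache/apt/archives/kopia-{{ version['packages']['kopia'] }}.deb"
    state: "present"
  become: true
```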
il a05951f883 fix(crowdsec): optimize whitelist expressions
update notes:
- add http_status and http_verb for each expression (actual budget, immich, opencloud)
- fix crowdsec and issues documents
2026-05-07 10:32:11 +09:00
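A whitelist entry guarded with http_status and http_verb, as described above, might look like the following sketch (the paths and values are illustrative assumptions, not the actual expressions from this commit):

```yaml
# Illustrative CrowdSec whitelist sketch; real expressions are not shown in this view.
name: crowdsecurity/whitelists
description: "Whitelist expected 404s from known app paths"
whitelist:
  reason: "known false positives"
  expression:
    - evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET' && evt.Meta.http_path startsWith '/example/'
```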
il b404a9e459 fix(crowdsec): update whitelist.yaml to prevent false positive
false positive:
- nextcloud thumbnail/preview 404 problem (crowdsecurity/http-probing)
2026-05-07 10:27:34 +09:00
il 3b4b56f53f fix(nftables): update fw nftables to allow vpn connection regardless of crowdsec ban 2026-05-07 09:22:49 +09:00
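One way to allow VPN traffic regardless of the CrowdSec ban set is to accept WireGuard packets before the ban-set drop rule is evaluated; a sketch (the port, chain layout, and set name are assumptions):

```nft
# Hypothetical nftables fragment; port and set name are assumptions.
chain input {
    type filter hook input priority filter; policy drop;
    udp dport 51820 accept comment "WireGuard accepted before crowdsec ban check"
    ip saddr @crowdsec-blacklists drop
}
```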
il f697715065 feat(sure): release sure (we-promise/sure)
deployment notes:
- let's try three budget apps: actual budget, ezbookkeeping, and sure
2026-05-06 18:52:31 +09:00
il be7f215380 feat(ezbookkeeping): release ezbookkeeping
deployment notes:
- use ezbookkeeping for budgeting
- compare with actual budget
- it has no RBAC or budget sharing; try sure (we-promise/sure) instead
2026-05-06 15:56:19 +09:00
il 26e0fe4f8b docs(ADR): update ADR 007 - backup to add checking rule and flows 2026-05-06 14:30:25 +09:00
il 2bb1f015e0 fix(kopia): update the bound home path from %h to ansible variable
update note:
- hotfix
- backups haven't run since commit '9f236b6fa5'
- the root service unit's %h always resolves to root's home path
- backup service is verified
2026-05-06 14:06:22 +09:00
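The %h pitfall above: in a unit managed by root, systemd's %h specifier expands to /root, not the backup user's home. A sketch of the fix (the variable follows the repository's conventions; the unit name and snapshot path are assumptions):

```ini
# kopia-backup.service (sketch; unit name and path are assumptions)
[Service]
# Before: %h expanded to /root in a root-managed unit
# ExecStart=/usr/bin/kopia snapshot create %h/data
# After: render the real home path from an Ansible variable at template time
ExecStart=/usr/bin/kopia snapshot create {{ node['home_path'] }}/data
```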
il 0f546e13b3 fix(btrfs): update btrfs scrub path
update notes:
- from '{{ node['home_path'] }}/data' to '{{ storage['btrfs']['mount_point'] }}'
2026-05-06 10:33:57 +09:00
il ba8b312bf2 feat(btrfs): update btrfs scrub service and timer on app vm 2026-05-06 08:15:53 +09:00
il 6fcedd9162 feat(collabora): release collabora
deployment note:
- link to nextcloud
- document opening is verified (including Korean fonts)
2026-05-05 21:20:31 +09:00
il 6ca4f61d50 docs(nextcloud): update security warning decisions and background job annotation
update notes:
- trusted_proxies warning
- HSTS option warning
- background job mode annotation
2026-05-05 20:09:00 +09:00
il 15c09cb899 docs(nextcloud): update how to disable auto generated contacts from nextcloud account 2026-05-03 12:05:11 +09:00
il 880857a70a fix(crowdsec): update parser 'crowdsecurity/nextcloud-whitelist'
update note:
- deprecate custom whitelist expression
- apply 'crowdsecurity/nextcloud-whitelist' parser
2026-05-03 07:19:59 +09:00
il 70bf539546 docs(issues): fix crowdsec whitelist regex to whitelist expressions 2026-05-02 20:40:10 +09:00
il 5dd38b7e49 fix(crowdsec): update whitelist.yaml to prevent false positive
false positive:
- chunk problems (crowdsecurity/http-crawl-non_statics)
- directory upload 404 problem (crowdsecurity/http-probing)
2026-05-02 20:38:48 +09:00
il 33d94211d1 docs(issues): fix crowdsec command 'cscli decision list' to 'cscli decision delete' 2026-05-02 19:46:51 +09:00
il 278dd3cebe feat(nextcloud): release nextcloud
deployment note:
- use nextcloud for groupware
- consider replacing vikunja and opencloud
2026-05-02 19:22:05 +09:00
il d1dcb1984a feat(vaultwarden): update vaultwarden version from 1.35.4 to 1.35.8 2026-04-30 10:03:33 +09:00
il 37c986177b feat(blocky): update blocky version from 0.28.2 to 0.29.0 2026-04-30 10:01:18 +09:00
il 17326b1b15 feat(step-ca): update step-ca version from 0.29.0 to 0.30.2
update note:
- step-ca container doesn't support $PWDPATH anymore
- add --password-file argument to exec
2026-04-30 09:56:22 +09:00
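With $PWDPATH no longer supported, the password file must be passed explicitly on the exec line; a sketch (only the --password-file flag is taken from the commit message, the paths are assumptions):

```ini
# Sketch of a quadlet/container exec line; paths are assumptions.
Exec=step-ca /home/step/config/ca.json --password-file /home/step/secrets/password
```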
il 88e1383202 feat(x509-exporter): update x509-exporter version from 3.19.1 to 3.21.0 2026-04-30 09:19:42 +09:00
il c9b4707cb2 refactor(x509-exporter): change handler from enable to restart 2026-04-30 09:18:44 +09:00
il da9c610426 feat(caddy): update caddy version from 2.10.2 to 2.11.2
update note:
- the Host rewrite for https upstreams is now automatic
- our Caddyfile already defines the Host rewrite explicitly, so no change is needed
2026-04-30 09:09:40 +09:00
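The explicit Host rewrite mentioned above typically looks like this in a Caddyfile (a generic sketch, not the repository's actual file):

```caddyfile
reverse_proxy https://upstream.internal {
    # Explicit Host rewrite; newer Caddy applies this automatically for HTTPS upstreams
    header_up Host {upstream_hostport}
}
```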
il c1a6da2aa8 feat(authelia): update authelia version from 4.39.15 to 4.39.19 2026-04-30 09:07:16 +09:00
il f1cd8c9a60 feat(gitea): update gitea version from 1.25.5 to 1.26.1
deployment note:
- stop gitea container
- create manual database backup
- update gitea
2026-04-30 08:28:51 +09:00
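The three deployment steps above could be expressed as tasks along these lines (the service, host, and database names are assumptions):

```yaml
# Sketch of the manual gitea update flow; names are assumptions.
- name: Stop gitea container
  ansible.builtin.systemd:
    name: "gitea.service"
    state: "stopped"
    scope: "user"
- name: Create manual database backup
  ansible.builtin.command:
    cmd: "pg_dump -h postgresql.internal -U gitea -f /tmp/gitea-pre-update.sql gitea_db"
- name: Start gitea with the updated version
  ansible.builtin.systemd:
    name: "gitea.service"
    state: "started"
    daemon_reload: true
    scope: "user"
```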
il 6010230a14 feat(paperless): update paperless version from 2.20.13 to 2.20.15 2026-04-30 08:10:50 +09:00
100 changed files with 2241 additions and 339 deletions
+5 -5
@@ -1,6 +1,6 @@
# ilnmors homelab README
This homelab project implements single-node On-premise IaaS system. The homelab contains virtual machines which are divided by their roles, such as private firewall, DNS, PKI, LDAP and database, SSO\(OIDC\). The standard domain is used to implement this system without specific vendors. All components are defined as code and initiated by IaC \(Ansible\) except hypervisor initial configuration.
This homelab project implements single-node On-premise IaaS system. The homelab contains virtual machines which are divided by their roles, such as private firewall, DNS, PKI, LDAP and database, SSO(OIDC). The standard domain is used to implement this system without specific vendors. All components are defined as code and initiated by IaC (Ansible) except hypervisor initial configuration.
## RTO times
- Feb/25/2026 - Reprovisioning Hypervisor and vms
@@ -15,12 +15,12 @@ This homelab project implements single-node On-premise IaaS system. The homelab
- Mar/5/2026 - Reprovisioning Hardware and Hypervisor and vms
- RTO: 2 hour 20 min
- console: 15min - verified
- certificate: 0 min \(When it needs to be created, RTO will be 20 min) - not verified
- wireguard: 0 min \(When it needs to be created, RTO will be 1 min) - not verified
- hypervisor\(+fw\): 45 min - verified
- certificate: 0 min (When it needs to be created, RTO will be 20 min) - not verified
- wireguard: 0 min (When it needs to be created, RTO will be 1 min) - not verified
- hypervisor(+fw): 45 min - verified
- switch: 1 min - verified
- dsm: 30 min - verified
- kopia: 0 min \(When it needs to be created, RTO will be 10 min) - verified
- kopia: 0 min (When it needs to be created, RTO will be 10 min) - verified
- Extra vms: 30 min - verified
- Etc: 30 min
+71 -21
@@ -66,7 +66,7 @@ services:
grafana:
domain: "grafana"
ports:
http: "3000"
http: "3000" # Infra server: Internal ports
subuid: "100471"
caddy:
ports:
@@ -97,7 +97,7 @@ services:
public: "gitea"
internal: "gitea.app"
ports:
http: "3000"
http: "3000" # App server: Public ports
subuid: "100999"
immich:
domain:
@@ -111,8 +111,8 @@ services:
http: "3003"
actualbudget:
domain:
public: "budget"
internal: "budget.app"
public: "actualbudget"
internal: "actualbudget.app"
ports:
http: "5006"
subuid: "101000"
@@ -148,39 +148,89 @@ services:
http: "3010"
redis: "6381"
manticore: "9308"
nextcloud:
domain:
public: "nextcloud"
internal: "nextcloud.app"
ports:
http: "8002"
redis: "6382"
subuid: "100032"
collabora:
domain:
public: "collabora"
internal: "collabora.app"
ports:
http: "9980"
subuid: "101000"
ezbookkeeping:
domain:
public: "budget"
internal: "budget.app"
ports:
http: "8003"
subuid: "100999"
sure:
domain:
public: "sure"
internal: "sure.app"
ports:
http: "3001"
redis: "6383"
subuid: "100999"
wikijs:
domain:
public: "wiki"
internal: "wiki.app"
ports:
http: "3002"
subuid: "100999"
trilium:
domain:
public: "notes"
internal: "notes.app"
ports:
http: "8004"
subuid: "100999"
version:
packages:
sops: "3.12.1"
step: "0.29.0"
step: "0.30.2"
kopia: "0.22.3"
blocky: "0.28.2"
alloy: "1.13.0"
blocky: "0.29.0"
alloy: "1.16.1"
containers:
# common
caddy: "2.10.2"
caddy: "2.11.2"
# infra
step: "0.29.0"
ldap: "v0.6.2"
x509-exporter: "3.19.1"
prometheus: "v3.9.1"
loki: "3.6.5"
grafana: "12.3.3"
step: "0.30.2"
ldap: "v0.6.3"
x509-exporter: "4.1.0"
prometheus: "v3.11.3"
loki: "3.7.1"
grafana: "13.0.1"
## Postgresql
postgresql: "18.2"
postgresql: "18.3"
# For immich - https://github.com/immich-app/base-images/blob/main/postgres/versions.yaml
# pgvector: "v0.8.1"
vectorchord: "0.5.3"
vectorchord: "1.1.1"
# Auth
authelia: "4.39.15"
authelia: "4.39.19"
# App
vaultwarden: "1.35.4"
gitea: "1.25.5"
redis: "8.6.1"
vaultwarden: "1.36.0"
gitea: "1.26.1"
redis: "8.6.3"
immich: "v2.7.5"
actualbudget: "26.3.0"
paperless: "2.20.13"
paperless: "2.20.15"
vikunja: "2.2.2"
opencloud: "4.0.6"
manticore: "25.0.0"
affine: "0.26.3"
nextcloud: "33.0.3"
collabora: "25.04.9.4.1"
ezbookkeeping: "1.4.0"
sure: "0.7.0-hotfix.2"
wikijs: "2.5.314"
trilium: "v0.102.2"
+47
@@ -225,6 +225,53 @@
tags: ["site", "affine"]
tags: ["site", "affine"]
- name: Set nextcloud
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_nextcloud"
apply:
tags: ["site", "nextcloud"]
tags: ["site", "nextcloud"]
- name: Set collabora
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_collabora"
apply:
tags: ["site", "collabora"]
tags: ["site", "collabora"]
- name: Set ezbookkeeping
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_ezbookkeeping"
apply:
tags: ["site", "ezbookkeeping"]
tags: ["site", "ezbookkeeping"]
- name: Set sure
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_sure"
apply:
tags: ["site", "sure"]
tags: ["site", "sure"]
- name: Set wiki.js
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_wikijs"
apply:
tags: ["site", "wikijs"]
tags: ["site", "wikijs"]
- name: Set trilium
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_trilium"
apply:
tags: ["site", "trilium"]
tags: ["site", "trilium"]
- name: Flush handlers right now
ansible.builtin.meta: "flush_handlers"
+8
@@ -122,3 +122,11 @@
apply:
tags: ["init", "site", "tools"]
tags: ["init", "site", "tools"]
- name: Set kopia
ansible.builtin.include_role:
name: "common"
tasks_from: "services/set_kopia"
apply:
tags: ["init", "site", "kopia"]
tags: ["init", "site", "kopia"]
+70
@@ -99,3 +99,73 @@
changed_when: false
listen: "notification_restart_affine"
ignore_errors: true # noqa: ignore-errors
- name: Restart nextcloud
ansible.builtin.systemd:
name: "nextcloud.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_nextcloud_init.stat.exists
changed_when: false
listen: "notification_restart_nextcloud"
ignore_errors: true # noqa: ignore-errors
- name: Restart collabora
ansible.builtin.systemd:
name: "collabora.service"
state: "restarted"
enabled: true
scope: "user"
daemon_reload: true
changed_when: false
listen: "notification_restart_collabora"
ignore_errors: true # noqa: ignore-errors
- name: Restart ezbookkeeping
ansible.builtin.systemd:
name: "ezbookkeeping.service"
state: "restarted"
enabled: true
scope: "user"
daemon_reload: true
changed_when: false
listen: "notification_restart_ezbookkeeping"
ignore_errors: true # noqa: ignore-errors
- name: Restart sure
ansible.builtin.systemd:
name: "{{ item }}"
state: "restarted"
enabled: true
scope: "user"
daemon_reload: true
loop:
- "sure-web.service"
- "sure-worker.service"
changed_when: false
listen: "notification_restart_sure"
ignore_errors: true # noqa: ignore-errors
- name: Restart wikijs
ansible.builtin.systemd:
name: "wikijs.service"
state: "restarted"
enabled: true
scope: "user"
daemon_reload: true
changed_when: false
listen: "notification_restart_wikijs"
ignore_errors: true # noqa: ignore-errors
- name: Restart trilium
ansible.builtin.systemd:
name: "trilium.service"
state: "restarted"
enabled: true
scope: "user"
daemon_reload: true
changed_when: false
listen: "notification_restart_trilium"
ignore_errors: true # noqa: ignore-errors
@@ -68,3 +68,23 @@
group: "svadmins"
mode: "0770"
become: true
- name: Deploy btrfs scrub service and timer
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/app/btrfs/{{ item }}.j2"
dest: "/etc/systemd/system/{{ item }}"
owner: "root"
group: "root"
mode: "0644"
loop:
- "btrfs-scrub.service"
- "btrfs-scrub.timer"
become: true
- name: Enable auto btrfs scrub
ansible.builtin.systemd:
name: "btrfs-scrub.timer"
state: "started"
enabled: true
daemon_reload: true
become: true
@@ -0,0 +1,17 @@
---
- name: Deploy container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/collabora/collabora.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/collabora.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_collabora"
- name: Enable collabora.service
ansible.builtin.systemd:
name: "collabora.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -0,0 +1,58 @@
---
- name: Create ezbookkeeping directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['ezbookkeeping']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/ezbookkeeping"
- "data/containers/ezbookkeeping/data"
- "containers/ezbookkeeping"
- "containers/ezbookkeeping/ssl"
become: true
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/ezbookkeeping/ssl/{{ root_cert_filename }}"
owner: "{{ services['ezbookkeeping']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_ezbookkeeping"
no_log: true
- name: Register secret value to podman secret
containers.podman.podman_secret:
name: "{{ item.name }}"
data: "{{ item.value }}"
state: "present"
force: true
loop:
- name: "EBK_AUTH_OAUTH2_CLIENT_SECRET"
value: "{{ hostvars['console']['ezbookkeeping']['oidc']['secret'] }}"
- name: "EBK_DATABASE_PASSWD"
value: "{{ hostvars['console']['postgresql']['password']['ezbookkeeping'] }}"
notify: "notification_restart_ezbookkeeping"
no_log: true
- name: Deploy ezbookkeeping.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/ezbookkeeping/ezbookkeeping.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/ezbookkeeping.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_ezbookkeeping"
- name: Enable ezbookkeeping.service
ansible.builtin.systemd:
name: "ezbookkeeping.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -0,0 +1,176 @@
---
- name: Set redis service name
ansible.builtin.set_fact:
redis_service: "nextcloud"
- name: Create redis_nextcloud directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['redis']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "containers/redis"
- "containers/redis/{{ redis_service }}"
- "containers/redis/{{ redis_service }}/data"
become: true
- name: Deploy redis config file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.conf.j2"
dest: "{{ node['home_path'] }}/containers/redis/{{ redis_service }}/redis.conf"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_conf"
- name: Deploy redis container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/redis_{{ redis_service }}.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_containerfile"
- name: Enable (Restart) redis service
ansible.builtin.systemd:
name: "redis_{{ redis_service }}.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_redis_conf.changed or is_redis_containerfile.changed # noqa: no-handler
- name: Create nextcloud directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/nextcloud"
- "data/containers/nextcloud/html"
- "containers/nextcloud"
- "containers/nextcloud/ssl"
- "containers/nextcloud/ini"
become: true
- name: Check data directory empty
ansible.builtin.stat:
path: "{{ node['home_path'] }}/data/containers/nextcloud/.init"
register: "is_nextcloud_init"
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/nextcloud/ssl/{{ root_cert_filename }}"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_nextcloud"
no_log: true
- name: Initialize nextcloud
when: not is_nextcloud_init.stat.exists
block:
- name: Execute init command (Including pulling image)
containers.podman.podman_container:
name: "nextcloud_init"
image: "docker.io/library/nextcloud:{{ version['containers']['nextcloud'] }}"
command: "/bin/true"
state: "started"
rm: true
detach: false
env:
NEXTCLOUD_UPDATE: "1"
NEXTCLOUD_ADMIN_USER: "admin-local"
NEXTCLOUD_ADMIN_PASSWORD: "{{ hostvars['console']['nextcloud']['admin-local']['password'] }}"
POSTGRES_HOST: "{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}"
POSTGRES_DB: "nextcloud_db"
POSTGRES_USER: "nextcloud"
POSTGRES_PASSWORD: "{{ hostvars['console']['postgresql']['password']['nextcloud'] }}"
PGSSLMODE: "verify-full"
PGSSLROOTCERT: "/etc/ssl/nextcloud/{{ root_cert_filename }}"
PGSSLCERTMODE: "disable"
REDIS_HOST: "host.containers.internal"
REDIS_HOST_PORT: "{{ services['nextcloud']['ports']['redis'] }}"
volume:
- "{{ node['home_path'] }}/containers/nextcloud/ssl:/etc/ssl/nextcloud:ro"
- "{{ node['home_path'] }}/data/containers/nextcloud/html:/var/www/html:rw"
no_log: true
- name: Create .init file
ansible.builtin.file:
path: "{{ node['home_path'] }}/data/containers/nextcloud/.init"
state: "touch"
mode: "0644"
owner: "{{ ansible_user }}"
group: "svadmins"
- name: Deploy config files
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/config/{{ item }}.j2"
dest: "{{ node['home_path'] }}/data/containers/nextcloud/html/config/{{ item }}"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0640"
loop:
- "background.config.php"
- "cache.config.php"
- "domain.config.php"
- "local_remote.config.php"
- "user_oidc.config.php"
become: true
notify: "notification_restart_nextcloud"
- name: Deploy opcache.ini file
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/ini/{{ item }}"
dest: "{{ node['home_path'] }}/containers/nextcloud/ini/{{ item }}"
group: "svadmins"
mode: "0644"
loop:
- "opcache.ini"
- "upload.ini"
notify: "notification_restart_nextcloud"
- name: Deploy nextcloud.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/nextcloud.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/nextcloud.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_nextcloud"
- name: Deploy nextcloud-cron service
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/systemd/{{ item }}"
dest: "{{ node['home_path'] }}/.config/systemd/user/{{ item }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
loop:
- "nextcloud-cron.service"
- "nextcloud-cron.timer"
- name: Enable nextcloud.service
ansible.builtin.systemd:
name: "nextcloud.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
- name: Enable nextcloud-cron.timer
ansible.builtin.systemd:
name: "nextcloud-cron.timer"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -0,0 +1,110 @@
---
- name: Set redis service name
ansible.builtin.set_fact:
redis_service: "sure"
- name: Create redis_sure directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['redis']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "containers/redis"
- "containers/redis/{{ redis_service }}"
- "containers/redis/{{ redis_service }}/data"
become: true
- name: Deploy redis config file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.conf.j2"
dest: "{{ node['home_path'] }}/containers/redis/{{ redis_service }}/redis.conf"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_conf"
- name: Deploy redis container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/redis_{{ redis_service }}.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_containerfile"
- name: Enable (Restart) redis service
ansible.builtin.systemd:
name: "redis_{{ redis_service }}.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_redis_conf.changed or is_redis_containerfile.changed # noqa: no-handler
- name: Create sure directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['sure']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/sure"
- "data/containers/sure/storage"
- "containers/sure"
- "containers/sure/ssl"
become: true
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/sure/ssl/{{ root_cert_filename }}"
owner: "{{ services['sure']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_sure"
no_log: true
- name: Register secret value to podman secret
containers.podman.podman_secret:
name: "{{ item.name }}"
data: "{{ item.value }}"
state: "present"
force: true
loop:
- name: "SURE_SECRET_KEY_BASE"
value: "{{ hostvars['console']['sure']['session_secret'] }}"
- name: "SURE_POSTGRES_PASSWORD"
value: "{{ hostvars['console']['postgresql']['password']['sure'] }}"
- name: "SURE_OIDC_CLIENT_SECRET"
value: "{{ hostvars['console']['sure']['oidc']['secret'] }}"
notify: "notification_restart_sure"
no_log: true
- name: Deploy sure.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/sure/{{ item }}.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/{{ item }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
loop:
- "sure-web.container"
- "sure-worker.container"
notify: "notification_restart_sure"
- name: Enable sure services
ansible.builtin.systemd:
name: "{{ item }}"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
loop:
- "sure-web.service"
- "sure-worker.service"
@@ -0,0 +1,38 @@
---
- name: Create trilium directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['trilium']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/trilium"
- "data/containers/trilium/data"
become: true
- name: Register secret value to podman secret
containers.podman.podman_secret:
name: "TRILIUM_OAUTH_CLIENT_SECRET"
data: "{{ hostvars['console']['trilium']['oidc']['secret'] }}"
state: "present"
force: true
notify: "notification_restart_trilium"
no_log: true
- name: Deploy trilium.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/trilium/trilium.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/trilium.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_trilium"
- name: Enable trilium.service
ansible.builtin.systemd:
name: "trilium.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -0,0 +1,53 @@
---
- name: Create wiki.js directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['wikijs']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/wikijs"
- "data/containers/wikijs/data"
- "data/containers/wikijs/export"
- "containers/wikijs"
- "containers/wikijs/ssl"
become: true
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/wikijs/ssl/{{ root_cert_filename }}"
owner: "{{ services['wikijs']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_wikijs"
no_log: true
- name: Register secret value to podman secret
containers.podman.podman_secret:
name: "WIKIJS_DB_PASS"
data: "{{ hostvars['console']['postgresql']['password']['wikijs'] }}"
state: "present"
force: true
notify: "notification_restart_wikijs"
no_log: true
- name: Deploy wikijs.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/wikijs/wikijs.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/wikijs.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_wikijs"
- name: Enable wikijs.service
ansible.builtin.systemd:
name: "wikijs.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -5,9 +5,10 @@
- hardware
become: true
- name: Deploy alloy deb file (x86_64)
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['data_path'] }}/bin/alloy-{{ version['packages']['alloy'] }}-amd64.deb"
- name: Download alloy deb file (x86_64)
ansible.builtin.get_url:
url: "https://github.com/grafana/alloy/releases/download/v{{ version['packages']['alloy'] }}/\
alloy-{{ version['packages']['alloy'] }}-1.amd64.deb"
dest: "/var/cache/apt/archives/alloy-{{ version['packages']['alloy'] }}.deb"
owner: "root"
group: "root"
@@ -15,9 +16,10 @@
become: true
when: ansible_facts['architecture'] == "x86_64"
- name: Deploy alloy deb file (aarch64)
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['data_path'] }}/bin/alloy-{{ version['packages']['alloy'] }}-arm64.deb"
- name: Download alloy deb file (aarch64)
ansible.builtin.get_url:
url: "https://github.com/grafana/alloy/releases/download/v{{ version['packages']['alloy'] }}/\
alloy-{{ version['packages']['alloy'] }}-1.arm64.deb"
dest: "/var/cache/apt/archives/alloy-{{ version['packages']['alloy'] }}.deb"
owner: "root"
group: "root"
@@ -30,6 +32,7 @@
deb: "/var/cache/apt/archives/alloy-{{ version['packages']['alloy'] }}.deb"
state: "present"
become: true
notify: "notification_restart_alloy"
- name: Deploy alloy config
ansible.builtin.template:
@@ -36,10 +36,15 @@
ansible.builtin.set_fact:
acquisd_list:
fw:
collection: "crowdsecurity/suricata"
collection:
- "crowdsecurity/suricata"
parser: []
config: "suricata.yaml"
auth:
collection: "crowdsecurity/caddy"
collection:
- "crowdsecurity/caddy"
parser:
- "crowdsecurity/nextcloud-whitelist"
config: "caddy.yaml"
- name: Deploy crowdsec-update service files
@@ -181,7 +186,8 @@
block:
- name: Install crowdsec collection
ansible.builtin.command:
cmd: "cscli collections install {{ acquisd_list[node['name']]['collection'] }}"
cmd: "cscli collections install {{ item }}"
loop: "{{ acquisd_list[node['name']]['collection'] }}"
become: true
changed_when: "'overwrite' not in is_collection_installed.stderr"
failed_when:
@@ -189,6 +195,17 @@
- "'already installed' not in is_collection_installed.stderr"
register: "is_collection_installed"
- name: Install crowdsec parser
ansible.builtin.command:
cmd: "cscli parsers install {{ item }}"
loop: "{{ acquisd_list[node['name']]['parser'] }}"
become: true
changed_when: "'overwrite' not in is_parser_installed.stderr"
failed_when:
- is_parser_installed.rc != 0
- "'already installed' not in is_parser_installed.stderr"
register: "is_parser_installed"
- name: Create crowdsec acquis.d directory
ansible.builtin.file:
path: "/etc/crowdsec/acquis.d"
@@ -5,34 +5,36 @@
- hardware
become: true
- name: Check kopia installation
ansible.builtin.shell: |
command -v kopia
changed_when: false
failed_when: false
register: "is_kopia_installed"
ignore_errors: true
- name: Set console kopia
when: node['name'] == 'console'
block:
- name: Apply cli tools (x86_64)
- name: Download kopia
ansible.builtin.get_url:
url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
kopia_{{ version['packages']['kopia'] }}_linux_{{ item }}.deb"
dest: "{{ node['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-{{ item }}.deb"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0600"
loop:
- "amd64"
- "arm64"
- name: Install kopia (x86_64)
ansible.builtin.apt:
deb: "{{ node['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-amd64.deb"
state: "present"
become: true
when:
- ansible_facts['architecture'] == "x86_64"
- is_kopia_installed.rc != 0
- name: Apply cli tools (aarch64)
when: ansible_facts['architecture'] == "x86_64"
- name: Install kopia (aarch64)
ansible.builtin.apt:
deb: "{{ node['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-arm64.deb"
state: "present"
become: true
when:
- ansible_facts['architecture'] == "aarch64"
- is_kopia_installed.rc != 0
- name: Connect kopia server
when: ansible_facts['architecture'] == "aarch64"
- name: Connect console kopia server
environment:
KOPIA_PASSWORD: "{{ hostvars['console']['kopia']['user']['console'] }}"
ansible.builtin.shell: |
@@ -51,30 +53,36 @@
- name: Set kopia uid
ansible.builtin.set_fact:
kopia_uid: 951
- name: Deploy kopia deb file (x86_64)
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-amd64.deb"
- name: Download kopia deb file (x86_64)
ansible.builtin.get_url:
url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
kopia_{{ version['packages']['kopia'] }}_linux_amd64.deb"
dest: "/var/cache/apt/archives/kopia-{{ version['packages']['kopia'] }}.deb"
owner: "root"
group: "root"
mode: "0644"
become: true
when: ansible_facts['architecture'] == "x86_64"
- name: Deploy kopia deb file (aarch64)
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-arm64.deb"
- name: Download kopia deb file (aarch64)
ansible.builtin.get_url:
url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
kopia_{{ version['packages']['kopia'] }}_linux_arm64.deb"
dest: "/var/cache/apt/archives/kopia-{{ version['packages']['kopia'] }}.deb"
owner: "root"
group: "root"
mode: "0644"
become: true
when: ansible_facts['architecture'] == "aarch64"
- name: Create kopia group
ansible.builtin.group:
name: "kopia"
gid: "{{ kopia_uid }}"
state: "present"
become: true
- name: Create kopia user
ansible.builtin.user:
name: "kopia"
@@ -85,6 +93,7 @@
comment: "Kopia backup user"
state: "present"
become: true
- name: Create kopia directory
ansible.builtin.file:
path: "{{ item.name }}"
@@ -101,12 +110,13 @@
mode: "0700"
become: true
no_log: true
- name: Install kopia
ansible.builtin.apt:
deb: "/var/cache/apt/archives/kopia-{{ version['packages']['kopia'] }}.deb"
state: "present"
become: true
when: is_kopia_installed.rc != 0
- name: Deploy kopia env
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/common/kopia/kopia.env.j2"
@@ -116,6 +126,7 @@
mode: "0400"
become: true
no_log: true
- name: Deploy kopia service files
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/common/kopia/{{ item }}.j2"
@@ -128,6 +139,7 @@
- "kopia-backup.service"
- "kopia-backup.timer"
become: true
- name: Enable kopia backup timer
ansible.builtin.systemd:
name: "kopia-backup.timer"
@@ -49,42 +49,6 @@
- "amd64"
- "arm64"
- name: Download kopia
ansible.builtin.get_url:
url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
kopia_{{ version['packages']['kopia'] }}_linux_{{ item }}.deb"
dest: "{{ node['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-{{ item }}.deb"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0600"
loop:
- "amd64"
- "arm64"
- name: Download blocky
ansible.builtin.get_url:
url: "https://github.com/0xERR0R/blocky/releases/download/v{{ version['packages']['blocky'] }}/\
blocky_v{{ version['packages']['blocky'] }}_Linux_{{ item }}.tar.gz"
dest: "{{ node['data_path'] }}/bin/blocky-{{ version['packages']['blocky'] }}-{{ item }}.tar.gz"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0600"
loop:
- "x86_64"
- "arm64"
- name: Download alloy
ansible.builtin.get_url:
url: "https://github.com/grafana/alloy/releases/download/v{{ version['packages']['alloy'] }}/\
alloy-{{ version['packages']['alloy'] }}-1.{{ item }}.deb"
dest: "{{ node['data_path'] }}/bin/alloy-{{ version['packages']['alloy'] }}-{{ item }}.deb"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0600"
loop:
- "amd64"
- "arm64"
- name: Apply cli tools (x86_64)
ansible.builtin.apt:
deb: "{{ node['data_path'] }}/bin/{{ item }}"
@@ -92,7 +56,6 @@
loop:
- "sops-{{ version['packages']['sops'] }}-amd64.deb"
- "step-{{ version['packages']['step'] }}-amd64.deb"
- "kopia-{{ version['packages']['kopia'] }}-amd64.deb"
become: true
when: ansible_facts['architecture'] == "x86_64"
@@ -103,6 +66,5 @@
loop:
- "sops-{{ version['packages']['sops'] }}-arm64.deb"
- "step-{{ version['packages']['step'] }}-arm64.deb"
- "kopia-{{ version['packages']['kopia'] }}-arm64.deb"
become: true
when: ansible_facts['architecture'] == "aarch64"
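The per-architecture install tasks above hinge on mapping `uname`/Ansible architecture names (`x86_64`, `aarch64`) onto Debian package arch names (`amd64`, `arm64`). A minimal shell sketch of that mapping (the kopia filename is illustrative, not the actual pinned version):

```shell
# Map machine architecture to the Debian package arch suffix,
# mirroring the x86_64/aarch64 -> amd64/arm64 split in the playbook.
arch="$(uname -m)"
case "$arch" in
  x86_64)  deb_arch=amd64 ;;
  aarch64) deb_arch=arm64 ;;
  *) echo "unsupported arch: $arch" >&2; exit 1 ;;
esac
echo "would install kopia_X.Y.Z_linux_${deb_arch}.deb"
```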
@@ -23,7 +23,7 @@
state: "present"
become: true
- name: Create blocky etc directory
- name: Create blocky directory
ansible.builtin.file:
path: "{{ item }}"
owner: "blocky"
@@ -31,13 +31,38 @@
mode: "0750"
state: "directory"
loop:
- "/home/blocky"
- "/home/blocky/bin"
- "/etc/blocky"
- "/etc/blocky/ssl"
become: true
- name: Download blocky (x86_64)
ansible.builtin.get_url:
url: "https://github.com/0xERR0R/blocky/releases/download/v{{ version['packages']['blocky'] }}/\
blocky_v{{ version['packages']['blocky'] }}_Linux_x86_64.tar.gz"
dest: "/home/blocky/bin/blocky-{{ version['packages']['blocky'] }}-x86_64.tar.gz"
owner: "blocky"
group: "blocky"
mode: "0600"
become: true
when: ansible_facts['architecture'] == "x86_64"
- name: Download blocky (aarch64)
ansible.builtin.get_url:
url: "https://github.com/0xERR0R/blocky/releases/download/v{{ version['packages']['blocky'] }}/\
blocky_v{{ version['packages']['blocky'] }}_Linux_arm64.tar.gz"
dest: "/home/blocky/bin/blocky-{{ version['packages']['blocky'] }}-arm64.tar.gz"
owner: "blocky"
group: "blocky"
mode: "0600"
become: true
when: ansible_facts['architecture'] == "aarch64"
- name: Deploy blocky binary file (x86_64)
ansible.builtin.unarchive:
src: "{{ hostvars['console']['node']['data_path'] }}/bin/blocky-{{ version['packages']['blocky'] }}-x86_64.tar.gz"
src: "/home/blocky/bin/blocky-{{ version['packages']['blocky'] }}-x86_64.tar.gz"
remote_src: true
dest: "/usr/local/bin/"
owner: "root"
group: "root"
@@ -52,7 +77,8 @@
- name: Deploy blocky binary file (aarch64)
ansible.builtin.unarchive:
src: "{{ hostvars['console']['node']['data_path'] }}/bin/blocky-{{ version['packages']['blocky'] }}-arm64.tar.gz"
src: "/home/blocky/bin/blocky-{{ version['packages']['blocky'] }}-arm64.tar.gz"
remote_src: true
dest: "/usr/local/bin/"
owner: "root"
group: "root"
@@ -73,10 +73,10 @@
listen: "notification_restart_grafana"
ignore_errors: true # noqa: ignore-errors
- name: Enable x509-exporter.service
- name: Restart x509-exporter.service
ansible.builtin.systemd:
name: "x509-exporter.service"
state: "started"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
@@ -11,6 +11,10 @@
- "paperless"
- "vikunja"
- "affine"
- "nextcloud"
- "ezbookkeeping"
- "sure"
- "wikijs"
- name: Create postgresql directory
ansible.builtin.file:
@@ -8,9 +8,20 @@
mode: "0770"
loop:
- "x509-exporter"
- "x509-exporter/config"
- "x509-exporter/certs"
become: true
- name: Deploy config.yaml
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/x509-exporter/config/config.yaml"
dest: "{{ node['home_path'] }}/containers/x509-exporter/config/config.yaml"
owner: "{{ services['x509-exporter']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_x509-exporter"
- name: Deploy certificates
ansible.builtin.copy:
content: |
@@ -82,6 +82,8 @@ table inet filter {
chain global {
# invalid packets
ct state invalid drop comment "deny invalid connection"
# VPN connection exception handling
udp dport $PORTS_VPN return comment "return vpn connection to input and forward chain"
# crowdsec
ip saddr @crowdsec-blacklists counter drop comment "deny all crowdsec blacklist"
ip6 saddr @crowdsec6-blacklists counter drop comment "deny all ipv6 crowdsec blacklist"
@@ -119,6 +119,10 @@ postgresql:
paperless: ENC[AES256_GCM,data:6VBrBbjVoam7SkZCSvoBTdrfkUoDghdGTiBmFLul04X/okXOHeC5zusJffY=,iv:iZumcJ3TWwZD77FzYx8THwCqC+EbnXUBrEKuPh3zgV8=,tag:u2m8SppAdxZ/duNdpuS3oQ==,type:str]
vikunja: ENC[AES256_GCM,data:/+wQdoFPTBG2elI9kZbAVWrHZ0DhMaYr4dc+2z9QNdb3TcDS2PEia0JuSAg=,iv:MViZTyUD8YqMmxSTWCQpJ30f/KQdQGOzPlRHHsQ8lAw=,tag:zov3POno139dkMxFDpj2gg==,type:str]
affine: ENC[AES256_GCM,data:XPXrcszsV06YqCJZ7CDqc4rCwqqNlbtLCFYfLAQ8jamLtft8L2UVrMA4WZo=,iv:vrWdBeckxB9tmEE628j4jhU+hSpE6TXYMGt0hh1Cg84=,tag:hlWwWUGht8NqWTZREMsa1Q==,type:str]
nextcloud: ENC[AES256_GCM,data:ROsximNuWYMTZktmLJPx7W1Qol/uT+APgwoCtFO/6ZYYc3KxKvlk344eqEc=,iv:4d+MrfIHjJKAcwhvZ3g4go66uZcieuL7lngKErJd+fg=,tag:QbWOtxeCbiu62GyrE2atXg==,type:str]
ezbookkeeping: ENC[AES256_GCM,data:CYYQ5DVr8Na46QduvUNF6d0XBVSXTml34q3/PhIYIvUNviOVgCjqXA4wN7g=,iv:qRljohJ+wI50XxSgMElKp65HyV3mKRTqDGjw9C1S0d0=,tag:PClp7PRmC0+PV0SzZpJqqQ==,type:str]
sure: ENC[AES256_GCM,data:FULJ2gjJ2gZC3s324itW+CjGRBHIP9RnOqw5TT1UaiUhb7UHAPm1na+LsZk=,iv:c0GnVZkxprJUzPPq3TCQaZvAes9QQuvDXqgVLLaiQIg=,tag:uDxy/Lkd2hNK4AWwMNMslw==,type:str]
wikijs: ENC[AES256_GCM,data:2drkkTevrcUrgxOHavIEPcemc2l5+/3GEAYNCYVL/63daVda5tzL61tPm2A=,iv:87qPrlRaosXO75eaxo4xjevVc1Pt9MiHv6lYFBB3MKU=,tag:SnVbVR4ZM0qvVmWpcgSKrg==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
@@ -255,6 +259,50 @@ affine:
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:PZS7EbvMHqHGorNUGAWj4dk1,iv:vOE+djRAvBTMM51kHi6kG5Arw3uPXlJt1d/BpcEaD0c=,tag:AuoCHLQz42CYvVVdKFWu1Q==,type:comment]
nextcloud:
admin-local:
password: ENC[AES256_GCM,data:mIwF5A09oqYbdK3bOKid9A896Q5J5Q6Ax+vDNqEJFGNdzd/mJ4oQS6rva+s=,iv:QroUMST2wnEJzk6DySe9tPZaWuqdxzJZ0+oi6mW6x00=,tag:3UTzjupK7+omrI3Hvyr8bA==,type:str]
oidc:
secret: ENC[AES256_GCM,data:Sr4KkKkYdkU0UWdpfUF7PyiGoerjBiw+sOFcENyLxw0FRXGG0Y8gv5uGb4Q=,iv:LbGsNM3+iY7bWFQe88TepVKUdiRQWZ+K7Ubn6ze6lV4=,tag:SbcfIAMW9ZprgahOFU4IQQ==,type:str]
hash: ENC[AES256_GCM,data:CkstbIYQmi72QhsbJZN0lQedgCn7TmGpYcYj0n+NvJIoTlol8G9N/88cwGbVoGK9nEISv54FL94cEJFppnMIuj0BHrhasrZsyI2/Lj52YLWdwNJWNQ+iYt+Ifp/1kI0zqmdoajzZ5DS2w/1evCBC1+JdfTRlpVXmSsHUIPIHelBRj90=,iv:vwvT5TTkF4woxXOvrRRqmrdLXf19s47NIDtdT+zLp0U=,tag:KC0MS0DTH6j3zIHOjCFOSA==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:tMahvC9OLW4+AGLyx68SNsOPBezApw==,iv:WHx8ruuQ33J/8XtwyhvDy2cKqE7lAWvj/r5AUhdyssU=,tag:uRwheXUxqNSIhcPqGeMNog==,type:comment]
ezbookkeeping:
oidc:
secret: ENC[AES256_GCM,data:ZMIfRwXDT1ujGKoc7DGvc8/O+ciB+kajo9yOwVsMsbEjl6D8gl6I0Lbsta8=,iv:++p1TTW6gDUEvh56SjMgldrpob/VWNtiYGo6wNS8cz0=,tag:LQaW333UskiN4mtIjUAguA==,type:str]
hash: ENC[AES256_GCM,data:XyB1N3MUzBHWHAumat7/ASy/Aja/gLKmeTriOqLnMgZ9lBE1birYtFW+R0wZ+vyx79tHKVnRxzrWsxoD5jitCmHyMVrJmJKl5c4SYMhytKfBPgrNe3twcc06U+wONmgAuVpaEQlnnyzAz42SpOHbT55GegHjYzT5hXax8eRvdM6xJSY=,iv:R4+EdQuKo2JumY3cu8KPpeFezcLhlehXBxr2wVG5wHk=,tag:hpDX1x9NCCutUsnDKEf1Sg==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:Fsqc2JDp9dvfgiCjdQ==,iv:3DALKKEXaP8hzXRvxD4CgfFpOiPPsOa16OB94n8WKp8=,tag:K+FF3zGrc0YLXWK/R2L3Ow==,type:comment]
sure:
session_secret: ENC[AES256_GCM,data:InHsz/jld8E9TwI8MWpxk9x2I7dxlIsY9R6jtDK2pBA=,iv:HY5yXEC2Dce26e9/vXTIWELvVd9ZjhcCwFD0jhz5pPw=,tag:LLSJovZ0RH3CUK+se7R4Ag==,type:str]
oidc:
secret: ENC[AES256_GCM,data:9BSvpcU9BJctSN9bkPIAsRxg8JNHTWvOKdpJFhm//CUDm/Xc7oC/ANHf5no=,iv:JVQLl/rp65kZSK/4SpVXxtiac3Z35XNkxWm2+lEdq/c=,tag:WgfaORiNlrO+wHSdnl4CWQ==,type:str]
hash: ENC[AES256_GCM,data:EjJ+1fP7/9wG2jG0Jv2hxMLtErqxjHBstRjru79dd5ZXhqwT7S+jpLfl9WpZU9qi20ps9YP4qe7G08p6NJNXjYhQj852GQxEORRh/9StAZsPt3p8w+ePZSVbivPQH+FpPKWYxoH0VR7y3TnL66R0tKRLh1fNTc5jRy5rU5r1bfs1jZ0=,iv:0y9FxW4QdD7qHz3bPRWlwHFpvOsvlYhVrOItB6BzaE8=,tag:Wc7MZhP3QRYmvZcjpoEWtQ==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:NkvAsD10P7qUvGPXeTY+rQ==,iv:GjsUk3Ht6RYW/rhkRhMSFEmtsAiS+dK7niYDJVBj2iE=,tag:8KnDcuRTm7P76Kh2hmWeXw==,type:comment]
wikijs:
il: ENC[AES256_GCM,data:gsAEHk4MI75EXIiqdb05RYSmlxaQ7mlYXTwTYYVJ20KC397T6xbHzvNojlI=,iv:iYc+BahiJ50LSr35/T1VCQsxsRen5rKLwQhfVQMkdz4=,tag:rscWcLWyTaSR4KEPJaes2A==,type:str]
oidc:
secret: ENC[AES256_GCM,data:+bmvyUkiQ+vnaJW7wgjohv4wdvliqx8whdSM8iBUJXGFy/QOs2oJm4FoUcA=,iv:U07y/+87zbXQ2hQ4HvzKcEH5nQsaSIF1Oh3yv6/ytWU=,tag:knGwjGhH5D/OSvW6j5S0VQ==,type:str]
hash: ENC[AES256_GCM,data:7jKBt9mdfxKDU6vBIP6k/wj0gIsRnLwwSrLOlnbbiNZVmbZXqv/UxEsLxCyx1rP2mzGgaxNCBh6WOo7mbSMPezMiuf/enrNrmIwpcP2R0H6LxGTiLFk/7EZ493oy7qFmmsM2qM7Y6qhhKUygD4XbJfVZ2sdojjIGAWy6XdpbbQICb5I=,iv:N3gPga+iDYUF0uAx671DP+4c7FYUKP12MEbYmKZRPAI=,tag:7tKwhxk5yQ0KfZrg0+v/rw==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:rf52AKZDCNq9PVnAMnDXzw==,iv:+rT8sgcAz0LoeUcPgIrpSw/JWvk5agunnTkaWac16kU=,tag:SCyTu1rUNnmS2EFMeIvlCw==,type:comment]
trilium:
oidc:
secret: ENC[AES256_GCM,data:EfKdxk/OBgQyGVwOnxMFS/HhucL5qicaB7HfWu4yNvmrqxU+ubkT62zJewQ=,iv:Ye4gNbyOuEaujGfxXYKg4GWDOP+cnTNL230t8B98WUY=,tag:B1YoabR7y8OVUKYj/aiSPA==,type:str]
hash: ENC[AES256_GCM,data:QyU+leT28FY3nW+tIbnap2n52xw1bcb77ziFf6cW9gdwwhL6rJCEaTGQritpVsCH5C9ytxlV0Acn7dJbnYSHFtZ2jbuvYMSQR4ewtY+tFX1MdD9+FmtH8umb7PHbG6upXgrXRNRIglJ4U1BEfg0xkdzEPbJq+r13A1+cKESrewayae4=,iv:CUE6YjDzgoc017e8+dT1S956PwmOlb7h6dhnOpCr3iw=,tag:XGgpzuVZXJ8Axb4ib8anVQ==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:T4Wtn49AAxPd2QUFTR+q,iv:bH5goGWBDqumAat9dUv2OwfCUJUpuVqncTMqMBZUXhI=,tag:G+W6hHA+yftQ+4RJpXrxHg==,type:comment]
switch:
password: ENC[AES256_GCM,data:qu0f9L7A0eFq/UCpaRs=,iv:W8LLOp3MSfd/+EfNEZNf91K8GgI5eUfVPoWTRES2C0Y=,tag:Q5FlAOfwqwJwPvd7k6i+0g==,type:str]
@@ -284,7 +332,7 @@ sops:
UmliaFNxVTBqRkI1QWJpWGpTRWxETW8KEY/8AfU73UOzCGhny1cNnd5dCNv7bHXt
k+uyWPPi+enFkVaceSwMFrA66uaWWrwAj11sXEB7yzvGFPrnAGezjQ==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2026-04-06T14:32:22Z"
mac: ENC[AES256_GCM,data:OFiSsBBAzOUoOwnAwhaplQQ8k2kUo+Avzk475BpaiOJoaB2c0wsJ3siP15tcLMrav4Qw8boZFo64v+rjdMoNI/MRo1EOYWNr1ZRMqHzwmQeaiMH2QcfoRZ0oLqrn5ekQztuPR9ULjDYZb63AwVGmzseUf4R5lGXgdgN5tjU/pH4=,iv:hqzDwryMuJ7JnkBazzDSznw05m7k61Sk61aPgO3JtpU=,tag:Lhhlgwy+YuQ1S0hkbsjecg==,type:str]
lastmodified: "2026-05-09T12:29:30Z"
mac: ENC[AES256_GCM,data:ql3rWwdwJRn2nH0SLnjTaJK4NVemxG8T814VEDaHv38bc7A3aaMGuZ92mHY4z+5oNA+DpR/UjkGJ/NrckbURxY63BEcyVCsS4Rb95HTKjDOjf2g5rrohdgI3ZUE1jvlyf3tAh2ZYh1J8QddLKyLju/J43KcB+XRQKhJv4kubAQ0=,iv:4inRbBMuhB7Hzi8fGpqyC3juUqteZGLXX0GtnHusF7Y=,tag:ZxJ6iv8NxJr4rvCInml8dg==,type:str]
unencrypted_suffix: _unencrypted
version: 3.12.1
@@ -0,0 +1,25 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Collabora Online
[Container]
Image=docker.io/collabora/code:{{ version['containers']['collabora'] }}
ContainerName=collabora
HostName=collabora
PublishPort={{ services['collabora']['ports']['http'] }}:9980/tcp
Environment="TZ=Asia/Seoul"
Environment="aliasgroup1=https://{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}"
# Environment="aliasgroup2=other_server_FQDN"
Environment="extra_params=--o:ssl.enable=false --o:ssl.termination=true --o:server_name={{ services['collabora']['domain']['public'] }}.{{ domain['public'] }} --o:admin_console.enable=false"
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,61 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=ezBookkeeping
After=network-online.target
Wants=network-online.target
[Container]
Image=docker.io/mayswind/ezbookkeeping:{{ version['containers']['ezbookkeeping'] }}
ContainerName=ezbookkeeping
HostName=ezbookkeeping
PublishPort={{ services['ezbookkeeping']['ports']['http'] }}:8080/tcp
Volume=%h/data/containers/ezbookkeeping/data:/data:rw
Volume=%h/containers/ezbookkeeping/ssl:/etc/ssl/ezbookkeeping:ro
# General
Environment="TZ=Asia/Seoul"
Environment="EBK_SERVER_DOMAIN={{ services['ezbookkeeping']['domain']['public'] }}.{{ domain['public'] }}"
Environment="EBK_SERVER_ROOT_URL=https://{{ services['ezbookkeeping']['domain']['public'] }}.{{ domain['public'] }}/"
Environment="EBK_LOG_MODE=console"
# Database
Environment="EBK_DATABASE_TYPE=postgres"
Environment="EBK_DATABASE_HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}"
Environment="EBK_DATABASE_NAME=ezbookkeeping_db"
Environment="EBK_DATABASE_USER=ezbookkeeping"
Secret=EBK_DATABASE_PASSWD,type=env
Environment="EBK_DATABASE_SSL_MODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/ezbookkeeping/{{ root_cert_filename }}"
# OIDC
Environment="EBK_AUTH_ENABLE_OAUTH2_AUTH=true"
Environment="EBK_AUTH_OAUTH2_PROVIDER=oidc"
Environment="EBK_AUTH_OAUTH2_CLIENT_ID=ezbookkeeping"
Secret=EBK_AUTH_OAUTH2_CLIENT_SECRET,type=env
Environment="EBK_AUTH_OAUTH2_USE_PKCE=true"
Environment="EBK_AUTH_OIDC_PROVIDER_BASE_URL=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="EBK_AUTH_ENABLE_OIDC_DISPLAY_NAME=true"
Environment="EBK_AUTH_OIDC_CUSTOM_DISPLAY_NAME=Authelia"
# Registration / auth policy
Environment="EBK_AUTH_ENABLE_INTERNAL_AUTH=false"
Environment="EBK_USER_ENABLE_REGISTER=true"
Environment="EBK_AUTH_OAUTH2_AUTO_REGISTER=true"
# AI / MCP disabled by default
Environment="EBK_MCP_ENABLE_MCP=false"
Environment="EBK_LLM_TRANSACTION_FROM_AI_IMAGE_RECOGNITION=false"
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,5 @@
<?php
$CONFIG = [
// Background jobs mode is auto-detected as 'cron' when nextcloud-cron.timer runs cron.php via CLI. No explicit config required.
'maintenance_window_start' => 18,
];
@@ -0,0 +1,12 @@
<?php
$CONFIG = [
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
'host' => 'host.containers.internal',
'port' => {{ services['nextcloud']['ports']['redis'] }},
'timeout' => 1.5,
'dbindex' => 0,
],
];
@@ -0,0 +1,9 @@
<?php
$CONFIG = [
'trusted_domains' => [
'{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
],
'overwritehost' => '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
'overwriteprotocol' => 'https',
'overwrite.cli.url' => 'https://{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
];
@@ -0,0 +1,4 @@
<?php
$CONFIG = [
'allow_local_remote_servers' => true,
];
@@ -0,0 +1,9 @@
<?php
$CONFIG = [
'user_oidc' => [
'default_token_endpoint_auth_method' => 'client_secret_post',
'auto_provision' => true,
'soft_auto_provision' => true,
'disable_account_creation' => false,
],
];
@@ -0,0 +1,14 @@
; /usr/local/etc/php/conf.d/opcache-recommended.ini
; OPcache tuning
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=512
opcache.interned_strings_buffer=32
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0
opcache.save_comments=1
opcache.revalidate_freq=60
opcache.fast_shutdown=1
; Enable APCu for the CLI
apc.enable_cli=1
@@ -0,0 +1,6 @@
; /usr/local/etc/php/conf.d/nextcloud-upload.ini
upload_max_filesize=16G
post_max_size=16G
memory_limit=1024M
max_execution_time=3600
max_input_time=3600
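The shorthand sizes above (`16G`, `1024M`) follow PHP's ini K/M/G multiplier rules (powers of 1024). A small sketch of the conversion, useful for sanity-checking that `post_max_size` is at least `upload_max_filesize` (the `to_bytes` helper name is illustrative):

```shell
# Convert a PHP ini shorthand size (K/M/G suffix) to bytes.
to_bytes() {
  v="$1"
  case "$v" in
    *G) echo $(( ${v%G} * 1024 * 1024 * 1024 )) ;;
    *M) echo $(( ${v%M} * 1024 * 1024 )) ;;
    *K) echo $(( ${v%K} * 1024 )) ;;
    *)  echo "$v" ;;
  esac
}
to_bytes 16G    # 17179869184
to_bytes 1024M  # 1073741824
```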
@@ -0,0 +1,36 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Nextcloud
[Container]
Image=docker.io/library/nextcloud:{{ version['containers']['nextcloud'] }}
ContainerName=nextcloud
HostName=nextcloud
PublishPort={{ services['nextcloud']['ports']['http'] }}:80
Volume=%h/containers/nextcloud/ssl:/etc/ssl/nextcloud:ro
Volume=%h/containers/nextcloud/ini/opcache.ini:/usr/local/etc/php/conf.d/opcache-recommended.ini:ro
Volume=%h/containers/nextcloud/ini/upload.ini:/usr/local/etc/php/conf.d/upload.ini:ro
Volume=%h/data/containers/nextcloud/html:/var/www/html:rw
# General
Environment="TZ=Asia/Seoul"
# PostgreSQL
Environment="PGSSLMODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/nextcloud/{{ root_cert_filename }}"
## libpq in the Nextcloud image automatically tries to present a client certificate for mTLS. When only server-side TLS is required, disable that behavior explicitly.
Environment="PGSSLCERTMODE=disable"
# Redis
Environment="REDIS_HOST=host.containers.internal"
Environment="REDIS_HOST_PORT={{ services['nextcloud']['ports']['redis'] }}"
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,8 @@
[Unit]
Description=Nextcloud cron.php
Requires=nextcloud.service
After=nextcloud.service
[Service]
Type=oneshot
ExecStart=/usr/bin/podman exec -u www-data nextcloud php -f /var/www/html/cron.php
@@ -0,0 +1,10 @@
[Unit]
Description=Run Nextcloud cron every 5 minutes
[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloud-cron.service
[Install]
WantedBy=timers.target
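Assuming `nextcloud-cron.service` and `nextcloud-cron.timer` are deployed to the rootless user's `~/.config/systemd/user/` directory, activating the timer would look roughly like this (an ops sketch, not taken from the playbook):

```shell
# Reload user units so systemd picks up the new files, then start the timer.
systemctl --user daemon-reload
systemctl --user enable --now nextcloud-cron.timer
# Verify the next scheduled run.
systemctl --user list-timers nextcloud-cron.timer
```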
@@ -0,0 +1,67 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Sure Web
After=network-online.target redis_sure.service
Wants=network-online.target redis_sure.service
[Container]
Image=ghcr.io/we-promise/sure:{{ version['containers']['sure'] }}
ContainerName=sure-web
HostName=sure-web
PublishPort={{ services['sure']['ports']['http'] }}:3000/tcp
Volume=%h/data/containers/sure/storage:/rails/storage:rw
Volume=%h/containers/sure/ssl:/etc/ssl/sure:ro
# General
Environment="TZ=Asia/Seoul"
Environment="SELF_HOSTED=true"
Environment="ONBOARDING_STATE=closed"
Environment="RAILS_FORCE_SSL=false"
Environment="RAILS_ASSUME_SSL=true"
Environment="APP_DOMAIN={{ services['sure']['domain']['public'] }}.{{ domain['public'] }}"
Secret=SURE_SECRET_KEY_BASE,type=env,target=SECRET_KEY_BASE
# PostgreSQL
Environment="POSTGRES_USER=sure"
Environment="POSTGRES_DB=sure_db"
Environment="DB_HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}"
Environment="DB_PORT={{ services['postgresql']['ports']['tcp'] }}"
Environment="PGSSLMODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/sure/{{ root_cert_filename }}"
Secret=SURE_POSTGRES_PASSWORD,type=env,target=POSTGRES_PASSWORD
# Redis
Environment="REDIS_URL=redis://host.containers.internal:{{ services['sure']['ports']['redis'] }}/1"
# OIDC - Authelia
Environment="OIDC_CLIENT_ID=sure"
Environment="OIDC_ISSUER=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="OIDC_REDIRECT_URI=https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}/auth/openid_connect/callback"
Secret=SURE_OIDC_CLIENT_SECRET,type=env,target=OIDC_CLIENT_SECRET
Environment="OIDC_BUTTON_LABEL=Sign in with Authelia"
Environment="AUTH_JIT_MODE=create_and_link"
# Restrict sign-up by email domain, e.g. ilnmors.internal allows only user@ilnmors.internal to sign up
Environment="ALLOWED_OIDC_DOMAINS="
# WebAuthn / Passkey
Environment="WEBAUTHN_RP_ID={{ domain['public'] }}"
Environment="WEBAUTHN_ALLOWED_ORIGINS=https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}"
# Provider
## Currency
Environment="EXCHANGE_RATE_PROVIDER=yahoo_finance"
Environment="SECURITIES_PROVIDER=yahoo_finance"
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,67 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Sure Worker
After=network-online.target redis_sure.service
Wants=network-online.target redis_sure.service
[Container]
Image=ghcr.io/we-promise/sure:{{ version['containers']['sure'] }}
ContainerName=sure-worker
HostName=sure-worker
Volume=%h/data/containers/sure/storage:/rails/storage:rw
Volume=%h/containers/sure/ssl:/etc/ssl/sure:ro
Exec=bundle exec sidekiq
# General
Environment="TZ=Asia/Seoul"
Environment="SELF_HOSTED=true"
Environment="ONBOARDING_STATE=closed"
Environment="RAILS_FORCE_SSL=false"
Environment="RAILS_ASSUME_SSL=true"
Environment="APP_DOMAIN={{ services['sure']['domain']['public'] }}.{{ domain['public'] }}"
Secret=SURE_SECRET_KEY_BASE,type=env,target=SECRET_KEY_BASE
# PostgreSQL
Environment="POSTGRES_USER=sure"
Environment="POSTGRES_DB=sure_db"
Environment="DB_HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}"
Environment="DB_PORT={{ services['postgresql']['ports']['tcp'] }}"
Environment="PGSSLMODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/sure/{{ root_cert_filename }}"
Secret=SURE_POSTGRES_PASSWORD,type=env,target=POSTGRES_PASSWORD
# Redis
Environment="REDIS_URL=redis://host.containers.internal:{{ services['sure']['ports']['redis'] }}/1"
# OIDC - Authelia
Environment="OIDC_CLIENT_ID=sure"
Environment="OIDC_ISSUER=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="OIDC_REDIRECT_URI=https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}/auth/openid_connect/callback"
Secret=SURE_OIDC_CLIENT_SECRET,type=env,target=OIDC_CLIENT_SECRET
Environment="OIDC_BUTTON_LABEL=Sign in with Authelia"
Environment="AUTH_JIT_MODE=create_and_link"
# Restrict sign-up by email domain, e.g. ilnmors.internal allows only user@ilnmors.internal to sign up
Environment="ALLOWED_OIDC_DOMAINS="
# WebAuthn / Passkey
Environment="WEBAUTHN_RP_ID={{ domain['public'] }}"
Environment="WEBAUTHN_ALLOWED_ORIGINS=https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}"
# Provider
## Currency
Environment="EXCHANGE_RATE_PROVIDER=yahoo_finance"
Environment="SECURITIES_PROVIDER=yahoo_finance"
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,46 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=trilium
After=network-online.target
Wants=network-online.target
[Container]
Image=docker.io/triliumnext/trilium:{{ version['containers']['trilium'] }}
ContainerName=trilium
HostName=trilium
PublishPort={{ services['trilium']['ports']['http'] }}:8080/tcp
Volume=%h/data/containers/trilium/data:/home/node/trilium-data:rw
# General
Environment="TZ=Asia/Seoul"
Environment="TRILIUM_DATA_DIR=/home/node/trilium-data"
Environment="TRILIUM_NO_UPLOAD_LIMIT=true"
# OIDC
## The short TRILIUM_OAUTH_* aliases do not work currently; use the full TRILIUM_MULTIFACTORAUTHENTICATION_* names below.
#Environment="TRILIUM_OAUTH_BASE_URL=https://{{ services['trilium']['domain']['public'] }}.{{ domain['public'] }}"
#Environment="TRILIUM_OAUTH_CLIENT_ID=trilium"
#Environment="TRILIUM_OAUTH_ISSUER_BASE_URL=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
#Environment="TRILIUM_OAUTH_ISSUER_NAME=Authelia"
#Environment="TRILIUM_OAUTH_ISSUER_ICON=https://www.authelia.com/images/branding/logo-cropped.png"
#Secret="TRILIUM_OAUTH_CLIENT_SECRET",type=env
Environment="TRILIUM_MULTIFACTORAUTHENTICATION_OAUTHBASEURL=https://{{ services['trilium']['domain']['public'] }}.{{ domain['public'] }}"
Environment="TRILIUM_MULTIFACTORAUTHENTICATION_OAUTHCLIENTID=trilium"
Environment="TRILIUM_MULTIFACTORAUTHENTICATION_OAUTHISSUERBASEURL=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="TRILIUM_MULTIFACTORAUTHENTICATION_OAUTHISSUERNAME=Authelia"
Environment="TRILIUM_MULTIFACTORAUTHENTICATION_OAUTHISSUERICON=https://www.authelia.com/images/branding/logo-cropped.png"
Secret="TRILIUM_OAUTH_CLIENT_SECRET",type=env,target=TRILIUM_MULTIFACTORAUTHENTICATION_OAUTHCLIENTSECRET
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,41 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Wiki.js
After=network-online.target
Wants=network-online.target
[Container]
Image=ghcr.io/requarks/wiki:{{ version['containers']['wikijs'] }}
ContainerName=wikijs
HostName=wikijs
PublishPort={{ services['wikijs']['ports']['http'] }}:3000/tcp
# Volumes
Volume=%h/data/containers/wikijs/data:/wiki/data:rw
Volume=%h/data/containers/wikijs/export:/wiki/export:rw
Volume=%h/containers/wikijs/ssl:/etc/ssl/wiki:ro
# General
Environment="TZ=Asia/Seoul"
# Database
Environment="DB_TYPE=postgres"
Environment="DB_HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}"
Environment="DB_PORT={{ services['postgresql']['ports']['tcp'] }}"
Environment="DB_USER=wikijs"
Environment="DB_NAME=wikijs_db"
Environment="DB_SSL=true"
Environment="NODE_EXTRA_CA_CERTS=/etc/ssl/wiki/{{ root_cert_filename }}"
Secret=WIKIJS_DB_PASS,type=env,target=DB_PASS
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -93,6 +93,14 @@ notifier:
identity_providers:
oidc:
hmac_secret: '' # $AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
claims_policies:
# Trilium expects the name/email claims in the ID token, but Authelia does not include them there by default
trilium:
id_token:
- email
- email_verified
- preferred_username
- name
# For apps that do not use a client secret.
cors:
endpoints:
@@ -365,3 +373,114 @@ identity_providers:
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/nextcloud/#openid-connect-user-backend-app
- client_id: 'nextcloud'
client_name: 'Nextcloud'
client_secret: '{{ hostvars['console']['nextcloud']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}/apps/user_oidc/code'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/ezbookkeeping/
- client_id: 'ezbookkeeping'
client_name: 'ezBookkeeping'
client_secret: '{{ hostvars['console']['ezbookkeeping']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://{{ services['ezbookkeeping']['domain']['public'] }}.{{ domain['public'] }}/oauth2/callback'
scopes:
- 'openid'
- 'profile'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
# https://www.authelia.com/integration/openid-connect/clients/sure/
- client_id: 'sure'
client_name: 'Sure'
client_secret: '{{ hostvars['console']['sure']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}/auth/openid_connect/callback'
scopes:
- 'openid'
- 'email'
- 'profile'
- 'groups'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
# https://www.authelia.com/integration/openid-connect/clients/wikijs/
- client_id: 'wikijs'
client_name: 'Wiki'
client_secret: '{{ hostvars['console']['wikijs']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
# The Callback URL / Redirect URI generated by Wiki.js goes here
- 'https://{{ services['wikijs']['domain']['public'] }}.{{ domain['public'] }}/login/aa72242e-7058-4cfa-9504-19a4208062ea/callback' # Note this must be copied during step 7 of the Application configuration.
scopes:
- 'openid'
- 'profile'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/trillium/
# The name is trilium, not trillium
- client_id: 'trilium'
client_name: 'Trilium Notes'
client_secret: '{{ hostvars['console']['trilium']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
# claims policy above
claims_policy: 'trilium'
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://{{ services['trilium']['domain']['public'] }}.{{ domain['public'] }}/callback'
scopes:
- 'openid'
- 'profile'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
@@ -77,3 +77,39 @@
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['nextcloud']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['nextcloud']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['collabora']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['collabora']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['ezbookkeeping']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['ezbookkeeping']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['sure']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['sure']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['wikijs']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['wikijs']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['trilium']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['trilium']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
@@ -136,6 +136,60 @@
}
}
}
{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['nextcloud']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['collabora']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['collabora']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['ezbookkeeping']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['ezbookkeeping']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['sure']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['sure']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['wikijs']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['wikijs']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['trilium']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['trilium']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
# Internal domain
{{ node['name'] }}.{{ domain['internal'] }} {
@@ -22,14 +22,17 @@ Volume=%h/containers/ca/db:/home/step/db:rw
Volume=%h/containers/ca/templates:/home/step/templates:rw
Environment="TZ=Asia/Seoul"
# Since 0.30.0, Docker CMD no longer expands PWDPATH.
#Environment="PWDPATH=/run/secrets/STEP_CA_PASSWORD"
Secret=STEP_CA_PASSWORD,target=/run/secrets/STEP_CA_PASSWORD
Exec=/usr/local/bin/step-ca --password-file /run/secrets/STEP_CA_PASSWORD /home/step/config/ca.json
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,11 @@
server:
listen: :9793
sources:
- kind: file
name: homelab-certs
paths:
- /certs/*.crt
- /certs/*.pem
- /certs/*.cer
refreshInterval: 1m
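For reference, once this config is loaded the exporter publishes each matched file's expiry as a Unix-epoch gauge (`x509_cert_not_after` in current releases, if memory serves), which converts to days remaining with plain shell arithmetic. The epoch values below are illustrative, not scraped:

```shell
# Illustrative sketch: convert an x509_cert_not_after epoch (as scraped
# from :9793/metrics) into days remaining. Both epochs are made up here;
# in practice use now=$(date +%s) and the scraped metric value.
not_after=1790000000
now=1780000000
days_left=$(( (not_after - now) / 86400 ))
echo "days until expiry: ${days_left}"   # → days until expiry: 115
```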
@@ -11,11 +11,12 @@ Image=docker.io/enix/x509-certificate-exporter:{{ version['containers']['x509-ex
ContainerName=x509-exporter
HostName=X509-exporter
Volume=%h/containers/x509-exporter/config/config.yaml:/etc/config.yaml:ro
Volume=%h/containers/x509-exporter/certs:/certs:ro
PublishPort={{ services['x509-exporter']['ports']['http'] }}:9793
Exec=--config /etc/config.yaml
[Service]
Restart=always
@@ -0,0 +1,10 @@
[Unit]
Description=BTRFS auto scrub
ConditionPathIsMountPoint={{ storage['btrfs']['mount_point'] }}
RequiresMountsFor={{ storage['btrfs']['mount_point'] }}
[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs scrub start -Bd {{ storage['btrfs']['mount_point'] }}
Nice=19
IOSchedulingClass=idle
@@ -0,0 +1,10 @@
[Unit]
Description=Monthly BTRFS auto scrub
[Timer]
OnCalendar=*-*-01 04:00:00
Persistent=true
RandomizedDelaySec=300
[Install]
WantedBy=timers.target
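`OnCalendar=*-*-01 04:00:00` fires at 04:00 on the first day of every month (plus up to 300s of randomized delay). A small GNU-date sketch to preview next month's trigger, ignoring the same-day edge case — on the host, `systemd-analyze calendar '*-*-01 04:00:00'` gives the authoritative answer:

```shell
# Preview next month's scrub window for OnCalendar=*-*-01 04:00:00
# (sketch only; systemd-analyze calendar is the authoritative tool).
next=$(date -d "$(date -d 'next month' +%Y-%m-01) 04:00:00" +'%Y-%m-%d %H:%M')
echo "next scrub window: ${next}"
```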
@@ -13,9 +13,11 @@ whitelist:
{% if node['name'] == 'auth' %}
expression:
# budget local-first sql scrap rule
- "evt.Meta.target_fqdn == '{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status in ['200', '304'] && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/data/migrations/'"
# immich thumbnail request 404 error false positive
- "evt.Meta.target_fqdn == '{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'"
# opencloud chunk request false positive
- "evt.Meta.target_fqdn == '{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status in ['200', '304'] && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/js/chunks/'"
# nextcloud thumbnail/preview request error false positive
- "evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET' && evt.Meta.http_path startsWith '/index.php/core/preview?'"
{% endif %}
@@ -21,9 +21,9 @@ ProtectHome=tmpfs
InaccessiblePaths=/boot /root
{% if node['name'] == 'infra' %}
BindReadOnlyPaths={{ node['home_path'] }}/containers/postgresql/backups
{% elif node['name'] == 'app' %}
BindReadOnlyPaths={{ node['home_path'] }}/data
{% endif %}
# In the root namespace, %u always resolves to 0
BindPaths=/etc/kopia
@@ -38,10 +38,10 @@ ExecStartPre=/usr/bin/kopia repository connect server \
{% if node['name'] == 'infra' %}
ExecStart=/usr/bin/kopia snapshot create \
{{ node['home_path'] }}/containers/postgresql/backups
{% elif node['name'] == 'app' %}
ExecStart=/usr/bin/kopia snapshot create \
{{ node['home_path'] }}/data
{% endif %}
@@ -14,22 +14,22 @@
## Context
- Maintaining multiple nodes requires a huge amount of resources, including hardware, electricity, and even administrative effort
- All units responsible for a single role should follow the Principle of Least Privilege (PoLP).
- All units should be interchangeable based on standards to avoid vendor lock-in.
## Consideration
### Hypervisor
- Proxmox Virtual Environment (PVE)
- Based on Debian.
- PVE uses `qm` command which is not a standard to implement the virtual environment.
- VMware ESXi
- Based on UNIX, developed by VMware (Licence is not free)
- Hyper-V
- Based on Microsoft Windows (Licence is not free)
- Debian Stable
- Based on standard linux (conservative)
- Standard virtualization technology 'Libvirt, QEMU, KVM'
### Container
@@ -37,7 +37,7 @@
- Docker
- Daemon is used to run containers
- Root authority required
- Socket and network problems are complex (Docker bridge)
- docker-compose is an orchestration tool
- Rootless Podman
- Daemonless design
@@ -58,7 +58,7 @@
## Decisions
- Use Libvirt/KVM/QEMU on pure linux (Debian stable).
- Separate all services by VM, and podman rootless containers without K3S.
- An orchestration stack is not needed in a single-node system
- Services will be defined by Quadlet to integrate into systemd and to manage them declaratively
@@ -23,15 +23,15 @@
- OPNSense/pfSense
- vendor lock-in
- GUI environment (WebGUI) can contain vulnerabilities
- It is hard to manage configurations by IaC
- iptables
- Previous standard of Linux
- IPv4 and IPv6 configuration is separated (no inet)
- nftables
- New standard of Linux
- English grammar friendly
- IPv4 and IPv6 configuration can be set on the same table (inet)
### Flat network structure
- LAN only
@@ -48,8 +48,8 @@
- VLAN 20: user (DHCP allocated devices)
- wg0: VPN connections
- Manage the rules based on roles fundamentally, furthermore manage them based on ip and ports when it is needed
- All L3 communication which needs to pass the gateway should be under the control of the firewall (fw)
- All nodes including the firewall use nftables (modern standard) to manage packets based on a zone concept
- IPv6 has a two-track strategy
- Client, server, and wg nodes have static ULA IPs, and use NAT66 for permanency
- User nodes have GUA SLAAC IPs from the ISP for compatibility
@@ -24,7 +24,7 @@
### Automate protocol
- JWK/JWT provisioner
- Pre-shared secret values are harder to manage than ACME (especially nsupdate)
- authorized_keys
- As nodes increase, it is hard to manage authorized_keys.
- SSH ca.pub allows all certificates signed by the CA key, so there is no need to manage authorized_keys on each host.
@@ -39,19 +39,19 @@
## Decisions
- Operate private CA
- Root CA (Store on coldstorage) - 10 years
- Intermediate CA (Online server as Step-CA) - 5 years
- SSH CA - No period
- Manage certificates with two tracks
- ACME with nsupdate (using private DNS) for web services via Caddy - 90 days
- Manual issuing and managing of leaf certificates for infra services for independence - 2.5 years
- All manually issued leaf certificates' expiry dates are observed by x509-exporter on the infra vm
- Manage SSH certificates
- *-cert.pub for host (with -h options)
- *-cert.pub for client (without -h options)
## Consequences
- Private PKI is operated
- Private SSH CA is operated
- All external/internal communication is encrypted as TLS re-encryption. (E2EE)
@@ -12,9 +12,9 @@
## Context
- Private authoritative DNS is required to use private reserved root domain (.internal)
- Split-horizon DNS needs a DNS resolver, because an authoritative DNS must not send queries to other DNS servers.
- Automatically issuing certificates needs a private authoritative DNS which supports nsupdate (RFC 2136)
## Consideration
@@ -22,13 +22,13 @@
- AdGuard Home
- More powerful query routing than blocky
- Web UI dependency
- Extra functions which are not useful (DHCP, etc.)
- Unbound DNS
- Cache and forward zone management is powerful
- more complex than blocky
- cache function is not that needed in this environment
- Internal authoritative DNS only takes charge of internal communication
- All security functions are delegated to public DNS like Cloudflare (DNSSEC, etc)
## Decisions
@@ -29,8 +29,8 @@
- Regex based log parsing is less structured than CrowdSec's parser/scenario model
- Crowdsec
- Community based rules and scenarios (CAPI)
- Prevention based on local machines and parsers (LAPI)
- Bouncers can use nftables to prevent threats
- Parser can detect even L7 attack under TLS
@@ -43,7 +43,7 @@
- Operate Crowdsec as IPS
- CrowdSec uses two API servers, CAPI and LAPI.
- CAPI updates malicious IPs based on community decisions
- LAPI decides malicious attacks based on logs from its parsers and scenarios (Suricata, caddy, etc)
- When CAPI or LAPI decides to block an IP based on logs parsed by the parsers and scenarios, the bouncer blocks the malicious access.
- CrowdSec registers the blacklist in nftables or iptables.
@@ -20,7 +20,7 @@
- HashiCorp Vault or Infisical
- Very powerful, but introduces significant compute/memory overhead.
- Creates a "Secret Zero" problem for a single-node homelab environment because of dependencies (DB, etc.).
- It is hard to operate hardware separated key servers.
### Systemd-credential
@@ -37,10 +37,10 @@
## Decisions
- All secret data which has yaml format is encrypted by sops with age-key in `secret.yaml`.
- age-key is encrypted by gpg and ansible vault with a master key (including upper, lower case, number, special letters) above 40 characters.
- All secret data is always decrypted by the `edit_secret.sh` script or Ansible tasks from secrets.yaml using the age-key encrypted by ansible-vault.
- Decrypted secret data is always processed on ramfs; it is never saved on disk.
- Master key is never saved on disk, but only on cold storage (USB, M-DISC, operators' memory)
- The secret data will be saved on each servers specific directory or podman secret.
- OS:
- path: /etc/secrets
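The ramfs-only rule above can be sketched like this, using tmpfs-backed /dev/shm as a stand-in for the RAM-backed workspace (the sops invocation is shown only as a comment; the file write here is a placeholder):

```shell
# Hedged sketch: stage decrypted secrets only on RAM-backed storage so
# plaintext never touches disk, and remove them on exit.
workdir=$(mktemp -d /dev/shm/secrets.XXXXXX)
trap 'rm -rf "$workdir"' EXIT
# The real flow would be something like: sops -d secrets.yaml > "$workdir/secrets.yaml"
printf 'dummy: value\n' > "$workdir/secrets.yaml"
test -s "$workdir/secrets.yaml" && echo "plaintext staged on RAM-backed storage only"
```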
@@ -6,7 +6,10 @@
- First documentation
- Feb/27/2026
- Status changed from Deferred to Accepted
- May/06/2026
- Add backup checking rules
## Status
@@ -14,7 +17,7 @@
## Context
- All configuration files are managed by git (IaC)
- All data file should be backed up by kopia
- All backup should follow 3-2-1 backup cycle
@@ -30,20 +33,26 @@
- Backing up the `/var/lib/postgresql` directory directly while the DB is running can lead to severe data corruption and inconsistency.
- Logical dumps (`pg_dump`) are much safer, database-agnostic, and easier to restore in a homelab environment.
### Silent failure problem
- May/06/2026: discovered that backups had not run since commit '9f236b6fa5' because of '%h' in a system service unit.
- The operator could not notice that backups were not running, because the service failed silently.
- Therefore, a checking rule was set.
## Decisions
- All configuration files are managed by Git
- Configuration files are based on text
- Versioning and history management are necessary.
- Local git -> private Gitea -> github private project (mirrored)
- This fulfills 3-2-1 backup rules
- Data files are managed by Kopia and DSM
- Local storage - kopia -> DSM's Kopia repository server - CloudSync -> Cloud server such as OneDrive or Google Drive
- This fulfills 3-2-1 backup rules
- Data files which needs backup
- DB data files: dump
- DB data files are located on infra:/home/infra/containers/postgresql/backups/{cluster,$service}/
- App data files: Photos, Media, etc ..
- App data files are located on app:/home/app/data/
- Backed up files: kopia
@@ -51,11 +60,18 @@
- Kopia over DSM configuration is managed by runbook with equivalent CLI commands due to vendor limitation
- Restore will be processed manually
- DB data files
- From kopia server to console:$HOMELAB_PATH/data/volume/infra/postgresql/{cluster,data}
- APP data files
- From kopia server to APP vm after initiating before deploy services
- Automatic backup does not guarantee the integrity of the data system, so before resetting the system, conduct a manual backup after making sure all services are shut down.
- Check the repository once a week (every Monday)
- Check the snapshot in repository with `kopia snapshot list --all`
- Mount the snapshot respectively with `kopia mount $SNAPSHOT_ID $DESTINATION`
- Copy random file from snapshot and check the values.
- If there's some failure, check the backup service and conduct backup immediately.
- Repeat the check flow.
- When everything is done, unmount the kopia mount with `Ctrl+C`
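The "copy a random file and check the values" step above boils down to a byte-for-byte compare; a self-contained sketch, where both directories are placeholders for the live data and the `kopia mount` target:

```shell
# Sketch of the verification step; SRC stands in for live data and MNT
# for the mounted snapshot (in reality: kopia mount $SNAPSHOT_ID $MNT).
SRC=$(mktemp -d)
MNT=$(mktemp -d)
echo "payload" > "$SRC/sample.txt"
cp "$SRC/sample.txt" "$MNT/sample.txt"
if cmp -s "$SRC/sample.txt" "$MNT/sample.txt"; then
  echo "backup verify: OK"
else
  echo "backup verify: MISMATCH"
fi
```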
## Consequences
- All files, including configuration and data backups, will fulfill the 3-2-1 (3 copies, 2 different media, 1 offsite) backup rule
@@ -11,7 +11,7 @@
## Context
- App VM needs GPU for heavy workloads like Immich (hardware transcoding and machine learning)
- App VM needs huge data storage for its own services
## Considerations
@@ -18,7 +18,7 @@
### Hypervisor
- As a pure hypervisor, it should only operate virtualization for VM.
- Hypervisor just provides resources and dummy hub (br)
### VM
@@ -30,7 +30,7 @@
### Services
- Services should be distinguished based on their needs (Privilege)
- Network and backup stacks need special privileges for low-level ACLs or networks.
- The application stack usually doesn't need low-level privileges
@@ -27,7 +27,7 @@
- Removing
- Formatting
- Destroying
- Certificates and CA ([ADR-003](./003-pki.md))
- Etc. what operator decides that is sensitive
## Consequences
@@ -19,7 +19,7 @@
### Apply mTLS
- implementing mTLS needs both client certificate and server certificate
- Managing a number of certificates makes a huge operational burden (expiry date, revocation, etc ..)
## Decisions
@@ -30,4 +30,4 @@
- The policy is set simple
- The overhead is increased little
- Exclude the exceptions on operation (For the administrator)
@@ -21,13 +21,14 @@
## Timeline
- 2026-03-21: Release actual budget
- 2026-03-21: Find the false positive case, and add whitelist
- 2026-05-07: Optimize whitelist expression
## Solution
- Access to fw
- Check the ban list with `sudo cscli alerts list`
- Read the ban case with `sudo cscli alerts inspect $NUMBER`
- Add expressions on whitelist
- evt.Meta.target_fqdn == '{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status in ['200', '304'] && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/data/migrations/'
- Delete false positive decision
- Check false positive decision with `sudo cscli decision list`
- Delete false positive decision with `sudo cscli decision delete --id $ID`
@@ -20,13 +20,14 @@
## Timeline
- 2026-03-21: Release Immich
- 2026-03-21: Find the false positive case, and add whitelist
- 2026-05-07: Optimize whitelist expression
## Solution
- Access to fw
- Check the ban list with `sudo cscli alerts list`
- Read the ban case with `sudo cscli alerts inspect $NUMBER`
- Add expressions on whitelist
- evt.Meta.target_fqdn == '{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'
- Delete false positive decision
- Check false positive decision with `sudo cscli decision list`
- Delete false positive decision with `sudo cscli decision delete --id $ID`
@@ -20,13 +20,14 @@
## Timeline
- 2026-04-04: Release OpenCloud
- 2026-04-04: Find the false positive case, and add whitelist
- 2026-05-07: Optimize whitelist expression
## Solution
- Access to fw
- Check the ban list with `sudo cscli alerts list`
- Read the ban case with `sudo cscli alerts inspect $NUMBER`
- Add expressions on whitelist
- evt.Meta.target_fqdn == '{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status in ['200', '304'] && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/js/chunks/'
- Delete false positive decision
- Check false positive decision with `sudo cscli decision list`
- Delete false positive decision with `sudo cscli decision delete --id $ID`
@@ -0,0 +1,42 @@
# Nextcloud crowdsec false positive issue
## Status
- Finished
## Date
- 2026-05-02
## Version
- Nextcloud: 33.0.3
## Problem
- When users download or modify some files, all connections to homelab services are refused.
- fw bans users' IP addresses.
## Reason
- Nextcloud has a lot of workflows which can be caught by CrowdSec
## Timeline
- 2026-05-02: Release nextcloud
- 2026-05-02: Find the false positive case, and add whitelist
- 2026-05-03: Install crowdsecurity/nextcloud-whitelist parser
- 2026-05-03: Comment out the previous expressions
- 2026-05-07: Find the false positive case, which is not on `crowdsecurity/nextcloud-whitelist`
- 2026-05-07: Set whitelist expression
## Solution
- Install crowdsecurity/nextcloud-whitelist on auth node
- Add expression on whitelist
- evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET' && evt.Meta.http_path startsWith '/index.php/core/preview?'
### Deprecated solution
- Access to fw
- Check the ban list with `sudo cscli alerts list`
- Read the ban case with `sudo cscli alerts inspect $NUMBER`
- Add expressions on whitelist
- evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/apps/viewer/js/'
- evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/dist/'
- evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/remote.php/dav/files/'
- Delete false positive decision
- Check false positive decision with `sudo cscli decision list`
- Delete false positive decision with `sudo cscli decision delete --id $ID`
@@ -22,7 +22,11 @@ When the migration is decided, the manual backup after shutting all services dow
Only when kopia repository exists.
```bash
kopia repository connect server \
  --url=https://nas.ilnmors.internal:51515 \
  --override-username=console \
  --override-hostname=console.ilnmors.internal
# Enter the password
kopia snapshot list --all
@@ -38,7 +42,7 @@ cp ~/workspace/homelab/volumes/infra/cluster/cluster.sql ~/workspace/homelab/con
### Provisioning
Ansible playbooks should be declarative. They won't contain complex branch logic (declarative over imperative). Playbooks describe what should be there, not how. The basic rule is manual destroy and auto reprovisioning.
#### vmm and fw
@@ -63,6 +63,10 @@ Set-Service DiagTrack -StartupType Disable
Stop-Service dmwappushservice
Set-Service dmwappushservice -StartupType Disable
## Disable - WorkloadsSessionHost which is the service for AI function
Stop-Service -Name "WSAIFabricSvc" -Force -ErrorAction SilentlyContinue; Set-Service -Name "WSAIFabricSvc" -StartupType Disabled
Stop-Process -Name "WorkloadsSessionHost" -Force -ErrorAction SilentlyContinue
## Compact OS configuration
compact /compactos:always
```
@@ -102,10 +106,10 @@ sign in on app only
- WindowsDefender Firewall:Inbound Rules:
- File and Printer Sharing (Echo Request - ICMPv4-In) - Profile: Private, Public
- General: `[x]` Enable
- Scope: 192.168.1.0/24, 192.168.10.0/24, 192.168.99.0/24
- File and Printer Sharing (Echo Request - ICMPv6-In) - Profile: Private, Public
- General: `[x]` Enable
- Scope: fd00::/8
- Apply
@@ -119,7 +123,8 @@ sign in on app only
### Create wsl config
- `C:\Users\$USERNAME\.wslconfig`
```ini
[wsl2]
processors=4
@@ -201,13 +206,13 @@ mkdir ~/workspace
#### VS Code configuration
- WSL extension(`Ctrl + shift + x`)
- Install `WSL` by Microsoft
- Remote Explorer:Debian:Connect in Current Windows
- `Ctrl + k` and `Ctrl + o`
- Open folder: `/home/console/workspace`
- `` Ctrl + shift + ` `` for Terminal
- Extensions(`Ctrl + shift + x`)
- Install `Ansible` by RedHat
### Playbooks
@@ -244,9 +249,9 @@ ansible-playbook playbooks/console/site.yaml --tags "init"
- encrypted by gpg and ansible vault with master key
- Master key
- The key has more than 40 characters, containing upper and lower case letters, numbers, and special characters
- managed by physical media (Mind, MDisc, paper) as file, string, and QR
- This value is never saved in server or console.
- Root CA (including ssh CA) must not be deployed.
- The tasks with root CA must be performed manually. The source of Trust is the most important in security.
- Intermediate CA can be deployed.
- Intermediate CA is operated as a live server.
@@ -12,7 +12,7 @@ openssl rand -base64 32 > /run/user/$UID/root_ca_password
openssl rand -base64 32 > /run/user/$UID/intermediate_ca_password
# Save the values in `secrets.yaml`
# Create CAs (Key and cert)
# Root CA
step certificate create \
"ilnmors.internal Root CA" /run/user/$UID/root_ca.crt /run/user/$UID/root_ca.key \
@@ -1,4 +1,4 @@
# Hypervisor (vmm)

Initiating the hypervisor doesn't use Ansible. The hypervisor works on the hardware itself, so there are many hardware-specific variables like IOMMU IDs, MAC addresses, etc.
@@ -19,29 +19,29 @@ Hypervisor is initiated manually with the configuration files which are stored i
- Hostname: vmm
- Domain: ilnmors.internal
- User:
- Root Password: `[blank]`
- Full name for the new user: vmm
- User Name: bootstrap
- User Password: debian
- Partition setting: manual
- 512MiB - EFI system partition (Booting flag: on)
- 1GiB - Ext4 Journaling (Mount: /boot)
- 800GiB - LVM
- 64GiB: vmm-root - Ext4 Journaling (Mount: /)
- 700GiB: vmm-libvirt - Ext4 (Mount: /var/lib/libvirt)
- Debian package manager setting
- Scan extra installation media: no
- Mirror country: South Korea
- Archive mirror: deb.debian.org
- Proxy: `[blank]`
- Popularity-contest: no
- Installing packages setting
- `[*]` SSH server
- `[*]` Standard system utilities
### Initial configuration
Hypervisor operates a pure L2 switch for fw, and it can never access the WAN without fw after the initial configuration. This means there is an air-gap: the hypervisor cannot access the WAN for a while (from the end of the initial setup to the beginning of the fw setup).

Hypervisor operates on hardware. Hardware information is always uncertain, and it is set only once. Managing this process as IaC is over-engineering.
@@ -6,19 +6,19 @@ All hardware configuration is set after fw vm. The MAC address of hardware is re
### Access VLAN switch
- http://switch.ilnmors.internal (192.168.1.2, KEA-DHCP, Only IPv4 support)
- Before IPv6 is set, use the IPv4 address instead of the FQDN
- id: admin, password: admin
- new password: switch.password
### Set VLAN
- VLAN:802.1Q VLAN
- `[x]` Enable - Apply
- VLAN client
- id 1
- name default > client
- member (Untagged)
- Port 1 (Trunk, untagged): the Linux bridge already processes untagged packets as id 1
- Port 3
- Port 4
- Port 5
@@ -29,13 +29,13 @@ All hardware configuration is set after fw vm. The MAC address of hardware is re
- id 10
- name server
- member
- Port 1 (Trunk, tagged)
- VLAN user
- id 20
- name user
- member
- Port 1 (Trunk, tagged)
- Port 2 (Not a member of client vlan, untagged)
- VLAN:802.1Q VLAN PVID setting
- Port 2
@@ -48,9 +48,9 @@ All hardware configuration is set after fw vm. The MAC address of hardware is re
- Check internet connection
## DSM \(DS124\)
## DSM (DS124)
- https://finds.synology.com/# \(192.168.1.11, KEA-DHCP\)
- https://finds.synology.com/# (192.168.1.11, KEA-DHCP)
- Install DSM
### Initial configuration
@@ -83,7 +83,7 @@ Kea in fw already reserved DSM's IP. However it is necessary to set IP address s
- Certificate
- Intermediate certificate
- Edit: For: Set as default certificate
- Setting \(!CAUTION!\)
- Setting (!CAUTION!)
- Even though you set the certificate as the default, you still have to set the certificate for each service.
- configure: service: certificate: nas.ilnmors.internal
@@ -92,20 +92,20 @@ Kea in fw already reserved DSM's IP. However it is necessary to set IP address s
- **!CAUTION!** It can be set after authelia is implemented
- Following [here](../../config/services/containers/auth/authelia/config/authelia.yaml.j2) for Authelia configuration
- Control Panel:Domain/LDAP:SSO Client
- Login Settings: \[x\] Select SSO by default on the login page
- Login Settings: `[x]` Select SSO by default on the login page
- Services
- \[x\] Enable OpenID Connect SSO service
- `[x]` Enable OpenID Connect SSO service
- OpenID Connect SSO Settings
- Profile: OIDC
- Account type: Domain/LDAP/local
- Name: Authelia
- Well-Known URL: https://authelia.ilnmors.com/.well-known/openid-configuration
- Application ID: dsm \(what you designated\)
- Application ID: dsm (what you designated)
- Application Secret: secret value
- Redirect URI: https://nas.ilnmors.internal:5001
- Authorization scope: openid profile groups email
- Username claim: preferred_username
- Match the user name \(ID\) in DSM and lldap id.
- Match the user name (ID) in DSM and lldap id.
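On the Authelia side, the matching client entry would look roughly like this (a sketch following Authelia's OIDC client schema; the secret variable name is an assumption):

```yaml
identity_providers:
  oidc:
    clients:
      - client_id: dsm
        client_name: DSM
        client_secret: '{{ dsm.oidc.hash }}'   # hashed secret; variable name is an assumption
        redirect_uris:
          - https://nas.ilnmors.internal:5001
        scopes: [openid, profile, groups, email]
```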
### Kopia in DSM
@@ -123,15 +123,15 @@ Kea in fw already reserved DSM's IP. However it is necessary to set IP address s
- Add certificate - DSM reverse proxy cannot deal with gRPC
- /docker/kopia/config/ssl/nas.key
- /docker/kopia/config/ssl/nas.crt \(including intermediate crt\)
- /docker/kopia/config/ssl/nas.crt (including intermediate crt)
- container manager:images:import
- kopia/kopia
- tags: \{\{ version['packages']['kopia'] \}\}
- tags: {{ version['packages']['kopia'] }}
- run
- image: kopia/kopia
- containername: kopia-server
- \[x\] Enable auto restart
- `[x]` Enable auto restart
- port: 51515:51515
- volume: /docker/kopia/config:/app/config:rw
- volume: /docker/kopia/cache:/app/cache:rw
@@ -159,7 +159,7 @@ Repository directory - encrypted by server KOPIA_PASSWORD as master key of repos
The server manages the ACL with user passwords (each user's own KOPIA_PASSWORD). When the server verifies a user with their password, it operates on the repository with its repository password.
Repository - \(Repository key; master key\) - Server - \(User key; access key\) - Client
Repository - (Repository key; master key) - Server - (User key; access key) - Client
- The client knows its access password (its KOPIA_PASSWORD) for reaching the server. It doesn't know the master key, the server's KOPIA_PASSWORD; the server controls the repository with that key. The variable name is the same, but the values are different.
+3 -3
@@ -8,11 +8,11 @@
# *! CAUTION !*
# THIS PROCESS CONTAINING SECRET VALUES.
# WHEN YOU TYPE THE COMMAND ON SHELL, YOU MUST USE [BLANK] BEFORE COMMAND
# WHEN YOU TYPE THE COMMAND ON SHELL, YOU MUST USE [blank] BEFORE COMMAND
# e.g.
# shell@shell$ command (X)
# shell@shell$ [BLANK]command (O)
# BLANK prevent the command to save on .bash_history
# shell@shell$ [blank]command (O)
# [blank] prevents the command from being saved to .bash_history
# After finishing this process, run `history -c` and `clear` just in case.
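The leading-blank trick depends on the shell's history settings; a minimal sketch, assuming bash with its `HISTCONTROL` variable (`ignorespace`, or `ignoreboth`, which also drops duplicates):

```shell
# The leading-[blank] trick only works when HISTCONTROL contains "ignorespace".
# "ignoreboth" = ignorespace + ignoredups. Sketch only - verify your own shell.
HISTCONTROL=ignoreboth
echo "$HISTCONTROL"
```

If `HISTCONTROL` is unset, the leading blank does nothing and the secret still lands in `.bash_history`.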
+11
@@ -70,6 +70,13 @@ git stash pop # get temporary save
# After git switch
git switch service
git rebase --ignore-date main # reset commit dates to the current time while rebasing onto main
# Example
git show HEAD
git show HEAD~$NUM
git diff HEAD~$NUM HEAD
git diff $PREVIOUS_HASH $CURRENT_HASH
```
## Add Service with git
@@ -90,6 +97,10 @@ git merge caddy-app
- Set this after gitea is implemented
```bash
# Copy git from remote
git clone --mirror https://gitea.ilnmors.com/il/ilnmors-homelab.git
# If the tags didn't come along, fetch them explicitly
git fetch --tags
# Add git remote repository
git config --global credential.helper store
git remote add origin https://gitea.ilnmors.com/il/ilnmors-homelab.git
+5 -5
@@ -54,8 +54,8 @@ CREATE EXTENSION IF NOT EXISTS vector;
### About community edition limitation
- Workspace seats
- The number of members itself \(account\) are unlimited.
- However the number of members who work on the same workspace simultaneously \(seats\) are designated as 10 members.
- The number of members (accounts) itself is unlimited.
- However, the number of members who can work on the same workspace simultaneously (seats) is capped at 10.
- Workspace storage quota
- Originally, the self-hosted version had no limits on storage quota or upload file size.
- Now there are some limits even in the self-hosted version.
@@ -85,8 +85,8 @@ CREATE EXTENSION IF NOT EXISTS vector;
#### Auth
- [ ] Whether allow new registrations
- [x] Whether allow new registration via configured oauth
- `[ ]` Whether allow new registrations
- `[x]` Whether allow new registration via configured oauth
- Minimum length requirement of password: 8
- Maximum length requirement of password: 50
- save
@@ -117,5 +117,5 @@ Environment="AFFINE_SERVER_HTTPS=true"
#### Flags
- [x] Whether allow guest users to create demo workspaces
- `[x]` Whether allow guest users to create demo workspaces
- save
+28
@@ -0,0 +1,28 @@
# Collabora office
## Prerequisite
- Nothing
## Configuration
- Admin page is disabled by Environment options
- `admin_console.enable=false`
### Link to nextcloud
- https://nextcloud.ilnmors.com
- login with admin account
- Profile: Apps: Nextcloud Office
- Check installation and enable
- Profile: Administration Settings: Nextcloud Office: Your own server
- http://host.containers.internal:9980 (collabora container port)
- Public FQDN is set automatically
- save
- Files
- Verify document opening (verified)
- The basic font `Noto Sans KR` exists
- Korean text is rendered well
+35
@@ -0,0 +1,35 @@
# ezBookkeeping
## Prerequisite
### Create database
- Create the password with `openssl rand -base64 32`
- Save this value in secrets.yaml in `postgresql.password.ezbookkeeping`
- Access the infra server to create ezbookkeeping_db with `podman exec -it postgresql psql -U postgres`
```SQL
CREATE USER ezbookkeeping WITH PASSWORD 'postgresql.password.ezbookkeeping';
CREATE DATABASE ezbookkeeping_db;
ALTER DATABASE ezbookkeeping_db OWNER TO ezbookkeeping;
```
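A sketch of the password step (the psql invocation is illustrative, not the authoritative flow): base64 of 32 random bytes is always 44 characters, which is a quick sanity check on the generated value.

```shell
# Generate the value that goes into secrets.yaml under postgresql.password.ezbookkeeping
PW="$(openssl rand -base64 32)"
echo "${#PW}"   # prints 44 - base64 of 32 bytes is always 44 characters
# The CREATE USER statement above would then take "$PW" in place of the
# 'postgresql.password.ezbookkeeping' placeholder, e.g. (illustrative only):
# podman exec -i postgresql psql -U postgres -c "CREATE USER ezbookkeeping WITH PASSWORD '$PW';"
```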
### Create oidc secret and hash
- Create the secret with `openssl rand -base64 32`
- Access the auth vm
- `podman exec -it authelia sh`
- `authelia crypto hash generate pbkdf2 --password 'ezbookkeeping.oidc.secret'`
- Save this value in secrets.yaml in `ezbookkeeping.oidc.secret` and `ezbookkeeping.oidc.hash`
### Add postgresql dump backup list
- [set_postgresql.yaml](../../../ansible/roles/infra/tasks/services/set_postgresql.yaml)
```yaml
- name: Set connected services list
ansible.builtin.set_fact:
connected_services:
- ...
- "ezbookkeeping"
```
+1 -1
@@ -61,7 +61,7 @@ CREATE EXTENSION IF NOT EXISTS earthdistance CASCADE;
- map
- version check
- User privacy
- google cast \(disable\)
- google cast (disable)
- Storage template
- `{{y}}/{{MM}}/{{y}}{{MM}}{{dd}}_{{hh}}{{mm}}{{ss}}`
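As a worked example of the template above (the extension is an assumption; the original one is kept), a photo taken at 2026-05-09 13:55:28 would resolve to:

```shell
# {{y}}=2026, {{MM}}=05, {{dd}}=09, {{hh}}=13, {{mm}}=55, {{ss}}=28
echo "2026/05/20260509_135528.jpg"
# prints 2026/05/20260509_135528.jpg
```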
- Backups
+106
@@ -0,0 +1,106 @@
# Nextcloud
## Prerequisite
### Create database
- Create the password with `openssl rand -base64 32`
- Save this value in secrets.yaml in `postgresql.password.nextcloud`
- Access infra server to create nextcloud_db with `podman exec -it postgresql psql -U postgres`
```SQL
CREATE USER nextcloud WITH PASSWORD 'postgresql.password.nextcloud';
CREATE DATABASE nextcloud_db;
ALTER DATABASE nextcloud_db OWNER TO nextcloud;
```
### Create oidc secret and hash
- Create the secret with `openssl rand -base64 32`
- Access the auth vm
- `podman exec -it authelia sh`
- `authelia crypto hash generate pbkdf2 --password 'nextcloud.oidc.secret'`
- Save this value in secrets.yaml in `nextcloud.oidc.secret` and `nextcloud.oidc.hash`
### Create admin password
- Create the secret with `openssl rand -base64 32`
- Save this value in secrets.yaml in `nextcloud.admin-local.password`
### Add postgresql dump backup list
- [set_postgresql.yaml](../../../ansible/roles/infra/tasks/services/set_postgresql.yaml)
```yaml
- name: Set connected services list
ansible.builtin.set_fact:
connected_services:
- ...
- "nextcloud"
```
## Configuration
### Access
- https://nextcloud.ilnmors.com
- login with admin-local
### Disable and enable apps
- Profile: Apps: Your apps: Disable
- Photo
- dashboard
- Profile: Apps: Search
- OpenID Connect user backend
- Calendar
- Contacts
- Deck
- Tasks
- Mail
- Nextcloud Office
### OIDC and DB Configuration
```bash
podman exec -u www-data nextcloud php occ user_oidc:provider Authelia \
--clientid="nextcloud" \
--clientsecret="nextcloud.oidc.secret" \
--discoveryuri="https://authelia.ilnmors.com/.well-known/openid-configuration" \
--scope="openid profile email groups" \
--unique-uid=0 \
--mapping-uid="preferred_username" \
--mapping-display-name="name" \
--mapping-email="email" \
--mapping-groups="groups" \
--group-whitelist-regex="/^users$/" \
--group-provisioning=1
podman exec -u www-data nextcloud php occ db:add-missing-indices
podman exec -u www-data nextcloud php occ db:add-missing-columns
podman exec -u www-data nextcloud php occ db:add-missing-primary-keys
```
### Account configuration
- Profile: Accounts:
- allocate admin group for admin users
#### Disable System addressbook expose
- Profile: Administration Settings: Groupware: System Address Book
- Disable `Enable system address book` option
## Security warning in Nextcloud (ignored)
### trusted_proxies option
- Nextcloud wants the admin to set `trusted_proxies` so that forwarded-IP headers can be trusted.
- In the current system, the app vm explicitly prevents access to the nextcloud container from outside the vm.
- The trusted_proxy IP address will always be 169.254.1.2 (caddy's APIPA address used in the PASTA network), so it cannot be distinguished from other containers anyway.
- Therefore, it doesn't need to be set.
### HSTS option
- This system already uses a main/sidecar reverse proxy layout, and the main proxy automatically turns HTTP requests into HTTPS requests (the Caddyfile listens on HTTPS).
- Main/sidecar communication also runs over HTTPS via the internal certificate.
- Therefore, it doesn't need to be set.
+2 -2
@@ -13,8 +13,8 @@
## Configuration
- **!CAUTION!** OpenCloud application \(Android, IOS, Desktop\) doesn't support standard OIDC. Every scopes and client id is hardcoded.
- WEBFINGER_\[DESKTOP|ANDROID|IOS\]_OIDC_CLIENT_ID, WEBFINGER_\[DESKTOP|ANDROID|IOS\]_OIDC_CLIENT_SCOPES don't work on official app.
- **!CAUTION!** The OpenCloud applications (Android, iOS, Desktop) don't support standard OIDC. All scopes and client ids are hardcoded.
- `WEBFINGER_[DESKTOP|ANDROID|IOS]_OIDC_CLIENT_ID`, `WEBFINGER_[DESKTOP|ANDROID|IOS]_OIDC_CLIENT_SCOPES` don't work on official app.
- It is impossible to set the group claim in the scopes. Therefore, it is hard to control roles with a token that includes the group claim.
- When authelia doesn't work, comment out `OC_EXCLUDE_RUN_SERVICES=idp` and restart the container to use the local admin.
- This app doesn't support regex on role_assignment mapping.
+2 -2
@@ -67,8 +67,8 @@ ALTER DATABASE paperless_db OWNER TO paperless;
- Mode: skip
- When the archive file has broken OCR text, run the replace command manually
- Skip archive File: never
- Deskew: disable \(toggle to enable and once more to active disable option\)
- rotate: disable \(toggle to enable and once more to active disable option\)
- Deskew: disable (toggle to enable and once more to activate the disable option)
- rotate: disable (toggle to enable and once more to activate the disable option)
## The non-standard pdf file
+67
@@ -0,0 +1,67 @@
# sure
## Prerequisite
### Create database
- Create the password with `openssl rand -base64 32`
- Save this value in secrets.yaml in `postgresql.password.sure`
- Access infra server to create sure_db with `podman exec -it postgresql psql -U postgres`
```SQL
CREATE USER sure WITH PASSWORD 'postgresql.password.sure';
CREATE DATABASE sure_db;
ALTER DATABASE sure_db OWNER TO sure;
```
### Create oidc secret and hash
- Create the secret with `openssl rand -base64 32`
- Access the auth vm
- `podman exec -it authelia sh`
- `authelia crypto hash generate pbkdf2 --password 'sure.oidc.secret'`
- Save this value in secrets.yaml in `sure.oidc.secret` and `sure.oidc.hash`
### Create session secret value
- Create the secret with `LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' </dev/urandom | head -c 32`
- Save this value in secrets.yaml in `sure.session_secret`
### Add postgresql dump backup list
- [set_postgresql.yaml](../../../ansible/roles/infra/tasks/services/set_postgresql.yaml)
```yaml
- name: Set connected services list
ansible.builtin.set_fact:
connected_services:
- ...
- "sure"
```
## Configuration
### Access to sure
- https://sure.ilnmors.com
- Sign in with Authelia
- Create account
### Account configuration
- Setup:
- First name and last name
- Will be using sure with
- `[x]` Family members
- Country: South Korea
- Preference:
- South Korean Won (KRW)
- date: YYYY-MM-DD
- Goals:
- Next
### Group and user configuration
- Profile: Settings: Profile info:
- Household: Set name
- Invite member: input email of member
+33
@@ -0,0 +1,33 @@
# trilium
## Prerequisite
### Create oidc secret and hash
- Create the secret with `openssl rand -base64 32`
- Access the auth vm
- `podman exec -it authelia sh`
- `authelia crypto hash generate pbkdf2 --password 'trilium.oidc.secret'`
- Save this value in secrets.yaml in `trilium.oidc.secret` and `trilium.oidc.hash`
## Configuration
### Access
- https://notes.ilnmors.com
- `[x]` I'm a new user, and I want to create a new Trilium document for my notes
- Next
- Password configuration
- local password login
### OIDC
- Menu: Options: MFA
- `[x]` Enable MFA
- `[x]` OAuth/OpenID
- logout
- Authelia
### about ERRORS
- This is quite unstable to use; OIDC in particular is a terrible experience.
+106
@@ -0,0 +1,106 @@
# wiki.js
## Prerequisite
### Create database
- Create the password with `openssl rand -base64 32`
- Save this value in secrets.yaml in `postgresql.password.wikijs`
- Access infra server to create wikijs_db with `podman exec -it postgresql psql -U postgres`
```SQL
CREATE USER wikijs WITH PASSWORD 'postgresql.password.wikijs';
CREATE DATABASE wikijs_db;
ALTER DATABASE wikijs_db OWNER TO wikijs;
```
### Create oidc secret and hash
- Create the secret with `openssl rand -base64 32`
- Access the auth vm
- `podman exec -it authelia sh`
- `authelia crypto hash generate pbkdf2 --password 'wikijs.oidc.secret'`
- Save this value in secrets.yaml in `wikijs.oidc.secret` and `wikijs.oidc.hash`
- !CAUTION! Don't update authelia with ansible-playbook before configuration
### Add postgresql dump backup list
- [set_postgresql.yaml](../../../ansible/roles/infra/tasks/services/set_postgresql.yaml)
```yaml
- name: Set connected services list
ansible.builtin.set_fact:
connected_services:
- ...
- "wikijs"
```
## Configuration
### Access
- https://wiki.ilnmors.com
- Administrator Email: admin@wiki.ilnmors.internal
- Password: wikijs.il.password
- Site URL: https://wiki.ilnmors.com
- INSTALL
### Group configuration
- Administration: Groups: Guests: PERMISSIONS
- Remove all permissions
- Administration: Groups: NEW GROUP
- Users
- Administration: Groups: Users: PERMISSIONS
- Grant all permission in CONTENT
- Administration: Groups: Users: PAGE RULES
- Allow / Deny: Allow
- Match: Path starts with
- Path: empty value
- Locale: Any / All
- Permissions:
- Grant all permission
- Update Group
### OIDC configuration
- Administration: Modules: Authentication
- Add Strategy: Generic OpenID Connect / OAuth2
- Display Name: Authelia
- client id: wikijs
- client secret: wikijs.oidc.secret
- Authorization Endpoint URL: https://authelia.ilnmors.com/api/oidc/authorization
- Token Endpoint URL: https://authelia.ilnmors.com/api/oidc/token
- User info Endpoint URL: https://authelia.ilnmors.com/api/oidc/userinfo
- Skip User Profile: untoggled
- Issuer: https://authelia.ilnmors.com
- Email Claim: email
- Display Name Claim: displayName
- Picture Claim: picture
- Map Groups: untoggled
- Groups Claim: groups
- Registration: Allow self-registration: toggled
- Assign to group: Users
- Check: Callback URL / Redirect URI
- Apply
- add Callback URL / Redirect URI to [authelia config](../../../config/services/containers/auth/authelia/config/authelia.yaml.j2)
- update authelia
- logout from administrator
- login: Select Authentication Provider: Authelia
### Storage
- Administration: Modules: Storage
- Local File System
- Path: /wiki/export
- Apply
### Locale
- Administration: Site: Locale
- Download what you need.
- Korean, Arabic, French ...
+1 -1
@@ -2,7 +2,7 @@
## Communication
Alloy runs on systemd \(host\), and postgresql runs as container \(rootless podman\). When host system and container communicate, container recognizes host system as host-gateway \(Link local address\).
Alloy runs on systemd (host), and postgresql runs as a container (rootless podman). When the host system and the container communicate, the container sees the host system as host-gateway (a link-local address).
## postgresql monitor
+2 -2
@@ -6,10 +6,10 @@ This is not a perfect E2EE communication theorogically, however technically it i
### .com public domain
WAN - \(Let's Encrypt certificate\) -> Caddy \(auth\) - \(ilnmors internal certificate\) -> Caddy \(app\) or https services - http -> app's local service
WAN - (Let's Encrypt certificate) -> Caddy (auth) - (ilnmors internal certificate) -> Caddy (app) or https services - http -> app's local service
### .internal private domain
client - \(ilnmors internal certificate\) -> Caddy \(Infra\) - http -> local services
client - (ilnmors internal certificate) -> Caddy (Infra) - http -> local services
### DNS record
+13 -13
@@ -3,16 +3,16 @@
## LAPI
### Detecting
Host logs \> CrowdSec Agent\(parser\) > CrowdSec LAPI
Host logs > CrowdSec Agent(parser) > CrowdSec LAPI
### Decision
CrowdSec LAPI \(Decision + Register\)
CrowdSec LAPI (Decision + Register)
### Block
CrowdSec LAPI \> CrowdSec Bouncer \(Block\)
CrowdSec LAPI > CrowdSec Bouncer (Block)
## CAPI
CrowdSec CAPI \> crowdsec LAPI \(local\) \> CrowdSec Bouncer \(Block\)
CrowdSec CAPI > crowdsec LAPI (local) > CrowdSec Bouncer (Block)
## Ansible Deployment
@@ -20,34 +20,34 @@ CrowdSec CAPI \> crowdsec LAPI \(local\) \> CrowdSec Bouncer \(Block\)
- Deploy fw's config.yaml
- Deploy crowdsec certificates
- Register machines \(Agents\)
- Register bouncers \(Bouncers\)
- Register machines (Agents)
- Register bouncers (Bouncers)
### Set Bouncer (fw/roles/tasks/set_crowdsec_bouncer.yaml)
- Deploy crowdsec-firewall-bouncer.yaml
- Install suricata collection \(parser\) with cscli
- Install suricata collection (parser) with cscli
- Set acquis.d for suricata
- set-only: the bouncer can't get metrics from chains and rule counters it didn't create - this means the prometheus metrics are unusable with the set-only true option.
- Chain and rule match counter results can still be checked with nftables.
- use sudo nft list chain inet filter global to check packet blocked. \(counter command is required\)
- use `sudo nft list chain inet filter global` to check blocked packets (the counter statement is required)
### Set Machines; agents (common/tasks/set_crowdsec_agent.yaml)
- Deploy config.yaml except fw \(disable LAPI, online_api_credentials\)
- Deploy config.yaml except fw (disable LAPI, online_api_credentials)
- Deploy local_api_credentials.yaml
### Set caddy host (auth/tasks/set_caddy.yaml)
- Set caddy CrowdSec module
- Set caddy log directory
- Install caddy collection \(parser\) with cscli
- Install caddy collection (parser) with cscli
- Set acquis.d for caddy
### Set whitelist (/etc/crowdsec/parser/s02-enrich/whitelists.yaml)
- Set only local console IP address
- This can block local VM to the other subnet, but the communication between vms is possible because they are in the same subnet\(L2\) - packets don't pass the fw.
- This can block a local VM from reaching the other subnets, but communication between vms is still possible because they are in the same subnet (L2): those packets don't pass the fw.
- The Crowdsec bouncer only blocks on the forward chain; traffic that passes the firewall is blocked by the bouncer based on LAPI decisions.
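A whitelist entry of this shape would implement the rule above (a sketch following CrowdSec's documented whitelist parser format; the console IP is an assumption):

```yaml
name: crowdsecurity/whitelists
description: "Whitelist the local console"
whitelist:
  reason: "local console"
  ip:
    - "192.168.1.3"   # hypothetical console address
```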
## Test
@@ -234,9 +234,9 @@ fw@fw:~$ sudo cscli alerts inspect 230 -d
- Check the log, analyze it, and build the expression
- e.g. immich
- evt.Meta.target_fqdn == 'immich.ilnmors.com' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'
- "evt.Meta.target_fqdn == '{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'"
- e.g. opencloud
- "evt.Meta.target_fqdn == '{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/js/chunks/'"
- "evt.Meta.target_fqdn == '{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status in ['200', '304'] && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/js/chunks/'"
- Remove the false-positive decision
fw@fw:~$ sudo cscli decision list
+1 -1
@@ -10,5 +10,5 @@ Kopia saves all information, even the users and policies on repository. Reposito
When kopia is run as a kopia server, clients can access the server with a user name and user password. The clients don't have to know the master password. The kopia server decrypts the repository with the master password, and each client just accesses the kopia server with its user account.
Repository \<- Master password -\> Kopia server \<- User password -\> Kopia client
Repository <- Master password -> Kopia server <- User password -> Kopia client
+9 -9
@@ -3,20 +3,20 @@
## IPv4
### Subnet management
- Static subnet \(manage without dhcp\)
- client \(for ipv4, set reservation\)
- Static subnet (manage without dhcp)
- client (for ipv4, set reservation)
- server
- Dynamic subnet \(manage with dhcp\)
- Dynamic subnet (manage with dhcp)
- user
## IPv6
### Subnet management
- Static subnet \(manage without RA - specific defination\)
- client \(Designated ULA with NAT66\)
- server \(Designated ULA with NAT66\)
- Dynamic subnet \(manage with RA and SLAAC\)
- user \(Autogenerated GUA\)
- Static subnet (manage without RA - specific definition)
- client (Designated ULA with NAT66)
- server (Designated ULA with NAT66)
- Dynamic subnet (manage with RA and SLAAC)
- user (Autogenerated GUA)
## Firewall policy for each subnet
@@ -26,4 +26,4 @@ Make polices based on each specific designated IP address for nodes.
### Dynamic subnet
Make polices based on subnet \(or interface itself\)
Make policies based on the subnet (or the interface itself)
+2 -2
@@ -142,5 +142,5 @@ podman exec -it ca step ca certificate test.com test.crt test_key --provisioner
### Firefox
- Setting - Security - view certificates - Authority - add
- \[x\] trust this ca to identify website
- \[x\] trust this ca to identify email users
- `[x]` trust this ca to identify website
- `[x]` trust this ca to identify email users
+5 -5
@@ -2,14 +2,14 @@
## Operation
Refer to Ansible playbook
\(Postgresql user and DB is needed\)
\(LDAP strict readonly account is needed\)
(Postgresql user and DB is needed)
(LDAP strict readonly account is needed)
## Verification
- Check Caddyfile \(without caddy, use 3000 ports\)
- Check Caddyfile (without caddy, use port 3000)
- https://grafana.ilnmors.internal
- login with LDAP user
- connection:data sources: \[prometheus|loki\]: provisioned
- connection:data sources: `[prometheus|loki]`: provisioned
- https://prometheus.ilnmors.internal:9090
- https://loki.ilnmors.internal:3100
@@ -17,4 +17,4 @@ Refer to Ansible playbook
## Dashboard
- Dashboard isn't saved on local directory. They are saved on DB \(Postgresql\).
- Dashboards aren't saved in a local directory. They are saved in the DB (Postgresql).
+13 -13
@@ -1,6 +1,6 @@
## Operation
Refer to Ansible playbook
\(Postgresql user and DB is needed\)
(Postgresql user and DB is needed)
Integrate configuration with various app: https://github.com/lldap/lldap/blob/main/example_configs
@@ -8,7 +8,7 @@ Integrate configuration with various app: https://github.com/lldap/lldap/blob/ma
### DB URL
Jinja2's `urlencode` filter doesn't replace `/` with `%2F`. `replace('/', '%2F')` is necessary.
ex\) {{ var | urlencode | replace('/', '%2F') }}
ex) {{ var | urlencode | replace('/', '%2F') }}
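The behaviour can be demonstrated with the underlying quoting function (assuming python3 is available; Jinja2's filter is built on urllib-style quoting, which treats `/` as safe by default):

```shell
python3 -c "from urllib.parse import quote; print(quote('p@ss/word'))"
# prints p%40ss/word - '@' is encoded, but '/' survives and still needs replace()
```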
### Reset administrator password
@@ -28,56 +28,56 @@ systemctl --user restart ldap.service
### Access web UI and Login
- URL: http://ldap.ilnmors.internal:17170 \(This is temporary access way before Caddy, which is reverse proxy, is set)
- URL: http://ldap.ilnmors.internal:17170 (This is temporary access way before Caddy, which is reverse proxy, is set)
- ID: admin
- PW: $LLDAP_LDAP_USER_PASSWORD
### Create the groups
- Groups - \[\+\] Create a group
- Groups - `[+]` Create a group
- Group: admins
- Group: users
It is necessary to manage ACL via authelia based on groups.
### Create the authelia user for OCID \(OP\)
### Create the authelia user for OCID (OP)
- Users: \[\+\] Create a user
- Users: `[+]` Create a user
- Username (cn; uid): authelia
- Display name: Authelia
- First Name: Authelia
- Last Name (sn): Service
- Email (mail): authelia@ilnmors.internal
- Password: "$(openssl rand -base64 32)"
- Groups:lldap_strict_readonly: \[Add to group\]
- Groups:lldap_strict_readonly: `[Add to group]`
- This group grants search authority.
- Users: \[\+\] Create a user
- Users: `[+]` Create a user
- Username (cn; uid): grafana
- Display name: Grafana
- First Name: Grafana
- Last Name (sn): Service
- Email (mail): grafana@ilnmors.internal
- Password: "$(openssl rand -base64 32)"
- Groups:lldap_strict_readonly: \[Add to group\]
- Groups:lldap_strict_readonly: `[Add to group]`
- This group grants search authority.
> Save the password in .secret.yaml
### Create the normal users
- Users: \[\+\] Create a user
- Users: `[+]` Create a user
- Username (cn; uid): il
- First Name: Il
- Last Name (sn): Lee
- Email (mail): il@ilnmors.internal
- Password: "$PASSWORD"
- Groups:lldap_admin&admins&users: \[Add to group\]
- Users: \[\+\] Create a user
- Groups:lldap_admin&admins&users: `[Add to group]`
- Users: `[+]` Create a user
- Username (cn; uid): user
- First Name: John
- Last Name (sn): Doe
- Email (mail): john_doe@ilnmors.internal
- Password: "$PASSWORD"
- Groups:(admins|users): \[Add to group\]
- Groups:(admins|users): `[Add to group]`
> Custom schemas in `User schema` and `Group schema` don't need to be added. They are an advanced feature for attaching extra values such as an identity number or phone number. The hardcoded (basic) schema that lldap provides is enough to use Authelia.
+1 -1
@@ -3,7 +3,7 @@
## Operation
Refer to Ansible playbook
## Verification
- fw@fw:/var/lib/bind$ curl -k https://loki.ilnmors.internal:3100/ready \(Node which is in NET_SERVER except infra itself\)
- fw@fw:/var/lib/bind$ curl -k https://loki.ilnmors.internal:3100/ready (Node which is in NET_SERVER except infra itself)
- ready
- fw@fw:/var/lib/bind$ curl -k https://loki.ilnmors.internal:3100/metrics
- metrics lists
+38 -2
@@ -37,14 +37,14 @@ podman exec -it -u postgres postgresql "psql -U postgres"
> \l
> \q
# Restor database (manually)
# Restore database (manually)
podman exec -u postgres postgresql "psql -U postgres -f $POSTGRESQL_BACKUP_PATH_IN_CONTAINER/script.sql"
# Backup service executes
systemctl --user start postgresql-cluster-backup.service
# Stop and remove all data
systemctl --stop postgresql
systemctl --user stop postgresql
sudo find "/home/infra/data/containers/postgresql/data" -mindepth 1 -delete
# Restore database
@@ -62,3 +62,39 @@ postgres=# SHOW shared_preload_libraries;
vchord.so
(1 row)
```
## Update and upgrade version
### Update version
#### Prerequisite
- Shutdown all related services on [infra, auth, app] vms.
- [service list](../../../ansible/roles/infra/tasks/services/set_postgresql.yaml)
- `systemctl --user stop $SERVICE`
- Run backup service unit on infra vm.
- `systemctl --user start postgresql-cluster-backup.service`
- `systemctl --user start postgresql-data-backup@$SERVICE.service`
- Modify postgresql and extension version and run ansible playbook
- [version info](../../../ansible/inventory/group_vars/all.yaml)
- `ansible-playbook playbooks/infra/site.yaml --tags "postgresql"`
- Check postgresql container and update extension
```postgresql
# immich example
# extension should be checked on each database which needs the extension
\c immich_db
\dx
# check the installed_version and default_version
ALTER EXTENSION vchord UPDATE;
REINDEX INDEX face_index;
REINDEX INDEX clip_index;
```
- Run playbook to start all services
- `ansible-playbook playbooks/[infra, auth, app]/site.yaml --tags "site"`
- Check all services
+1 -1
@@ -3,7 +3,7 @@
## Operation
Refer to Ansible playbook
## Verification
- Check Caddyfile \(without caddy, use 9090 ports\)
- Check Caddyfile (without caddy, use port 9090)
- https://prometheus.ilnmors.internal
- Status:Target Health
- Check that `Endpoint localhost:9090` shows a green circle
+5 -5
@@ -4,7 +4,7 @@
- link file
A link file links a hardware interface and the kernel while booting
- netdev file
netdev file defines virtual interface \(port, bridge\)
netdev file defines virtual interface (port, bridge)
- network file
network files define network options on top of those interfaces
@@ -12,7 +12,7 @@
- reload
- networkctl reload
- networkctl reconfigure \[interface name\]
- networkctl reconfigure [interface name]
## references
@@ -24,10 +24,10 @@
## Plans
- Hypervisor's linux bridges work as L2 switch
- br0 is completely L2 switch \(LinkLocalAddressing=no\)
- br0 is completely L2 switch (LinkLocalAddressing=no)
- br1 has an ip address for the hypervisor itself, but basically works as an L2 switch which can deal with VLAN tags; id=1,10
- Firewall's port \(wan\) works as Gateway which can conduct NAT
- Firewall's port \(clients\) works as trunk port which can deal with VLAN tags; id=1,10,20
- Firewall's port (wan) works as Gateway which can conduct NAT
- Firewall's port (clients) works as trunk port which can deal with VLAN tags; id=1,10,20
- Firewall's port
- client, id = 1
- server, id = 10
+1 -1
@@ -4,7 +4,7 @@ Quadlet is for defining container configuration and lifecycle combining systemd
## Rootless container
Containers should be isolated from host OS. However, docker runs with root permission on daemon \(dockerd\). This means when one docker container has vulnerability and it is taken over, all the host system authority is threatened. Rootless container, podman runs without root permission and daemon so that even if one of containers is taken over, prevent the damage in host's normal user authority.
Containers should be isolated from the host OS. However, docker runs its daemon (dockerd) with root permission, so if one container has a vulnerability and is taken over, the whole host system's authority is threatened. Podman, a rootless container runtime, runs without root permission or a daemon, so even if a container is taken over, the damage is contained within the host's normal user authority.
A rootless container maps UID/GID between the host and its own user namespace. The host user's UID/GID is mapped to the container's root, and the host's subuid/subgid ranges defined in `/etc/subuid` and `/etc/subgid` are mapped to the container's other UIDs/GIDs by default.
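A worked example of that default mapping, assuming a typical `/etc/subuid` entry such as `infra:100000:65536` and a host user with UID 1000 (both values are assumptions):

```shell
# container UID 0        -> host UID 1000 (the rootless user itself)
# container UID 1..65536 -> host UID 100000..165535
# i.e. container UID N (N >= 1) maps to host UID 100000 + N - 1:
awk -v cuid=26 'BEGIN { print 100000 + cuid - 1 }'
# prints 100025 - a container's UID 26 appears as host UID 100025
```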
+52 -40
@@ -2,38 +2,38 @@
## Console
- OS: WSL2 \(Debian 13\)
- OS: WSL2 (Debian 13)
- Processor: 4vCPU
- Memory: 4GiB
- Disk:
- 32GiB for `/` \(VHD file\)
- 32GiB for `/` (VHD file)
- Services:
- [x] Terminal
- [x] Step-CLI
- [x] Ansible
- [x] Git
- [x] Kopia
- [x] cloud-image-utils
## vmm (Hypervisor)
- OS: Debian 13
- Processor: pCPU (N150)
- Memory: 3GiB (margin)
- KSM allows more than 3GiB for vmm
- MAC:
- c8:ff:bf:05:aa:b0
- c8:ff:bf:05:aa:b1
- Disk:
- SSD:
- 64GiB for `/` (ext4 in LVM)
- 700GiB for `/var/lib/libvirt` (ext4 in LVM)
- Services:
- [x] QEMU/KVM
- [x] libvirtd
- [x] ksmtuned
## fw (Firewall)
- OS: Debian 13
- Processor: 2vCPU
@@ -43,20 +43,20 @@
- 0a:49:6e:4d:00:00
- 0a:49:6e:4d:00:01
- Disk:
- SSD: 64GiB for `/` (ext4 in qcow2 file)
- Services:
- native packages:
- [x] nftables (firewall based on ZONE)
- [x] Suricata (IDS)
- [x] CrowdSec LAPI (IPS)
- [x] Kea DHCP
- [x] Wireguard-tool
- [x] BIND9 (Local authoritative DNS)
- [x] Blocky (Resolver DNS)
- Scripts:
- [x] ddns.sh
## infra (Infrastructure)
- OS: Debian 13
- Processor: 2vCPU
@@ -64,15 +64,15 @@
- Memory: 6GiB
- MAC: 0a:49:6e:4d:01:00
- Disk:
- SSD: 256GiB for `/` (ext4 in qcow2 file)
- Services:
- Rootless containers:
- [x] PostgreSQL
- [x] lldap
- [x] Step-CA
- [x] Caddy (with nsupdate)
- [x] Prometheus (alloy - push)
- [x] Loki (alloy)
- [x] Grafana
<!--
Mail service is not needed, especially Diun is not needed.
@@ -80,12 +80,12 @@
- Dovecot
- mbsync
- Diun
- Study (Rootless container):
- Kali
- Debian
-->
## auth (Authorization)
- OS: Debian 13
- Processor: 2vCPU
@@ -93,13 +93,13 @@
- Memory: 2GiB
- MAC: 0a:49:6e:4d:02:00
- Disk:
- SSD: 64GiB for `/` (ext4 in qcow2 file)
- Services:
- Rootless containers:
- [x] Caddy (with nsupdate, crowdsec-http, crowdsec-bouncer module)
- [x] authelia
## app (Application)
- OS: Debian 13
- Processor: 4vCPU
@@ -107,9 +107,9 @@
- Memory: 16GiB
- MAC: 0a:49:6e:4d:03:00
- Disk:
- SSD: 256GiB for `/` (ext4 in qcow2 file)
- HDD: 4TB for `/home/app/data` (btrfs)
- VFIO (Hardware passthrough):
- Graphic: N150 iGPU
- Disk: SATA Controller
- Services:
@@ -119,14 +119,26 @@
- [x] Immich
- [x] Actual budget
- [x] Paperless-ngx
- [x] vikunja (Comparing to Nextcloud deck)
- [x] OpenCloud (Comparing to Nextcloud)
- [x] affine (Notion substitution)
- [x] Nextcloud (used as CalDAV and CardDAV server, plus kanban and todo)
- [x] Collabora office (linked to Nextcloud; it works well)
- [x] ezBookkeeping
- budget.ilnmors.com is used for ezBookkeeping; Actual Budget's domain is changed to actualbudget.ilnmors.com
- [x] sure
- comparing sure, ezBookkeeping, and Actual Budget
- ezBookkeeping has no way to share accounts and budgets with other users.
- Actual Budget's YNAB-style workflow is hard to adjust to.
- sure is heavy, but it is not YNAB-style and it allows sharing accounts with other users.
- [x] wiki.js
- evaluating wiki.js as the base wiki for documents.
- [x] TriliumNext
- UNSTABLE; currently unusable.
- [ ] memos
- WriteFreely or Directus + frontend (Astro)
- MediaCMS or PeerTube
- Funkwhale or Navidrome or Jellyfin
- Kavita
- Audiobookshelf
- Miniflux
@@ -142,8 +154,8 @@
## External Backup server
- OS: DSM (Synology)
- Processor: pCPU (Realtek RTD1619B)
- Memory: 1GiB
- MAC: 90:09:d0:65:a9:db
- Disk:
@@ -151,4 +163,4 @@
- Services:
- SFTP
- Kopia repository server
- CloudSync (Upload backup files to Cloud)
@@ -5,27 +5,27 @@
### Main server
- Aoostar WTR Pro N150
- Processor: Intel N150 (4C4T)
- Graphic: Intel UHD Graphics
- 2.5 Gbps NIC x 2
- M.2 Slot x 2 (SSD, WiFi)
- SATA bay x 4
- 279,900 KRW
- Samsung DDR4 SO-DIMM 3200 32G x 1
- 106,900 KRW
- Samsung 980 Pro 1TB TLC x 1
- 276,000 KRW (Previously owned)
- 3RAYS Glacier 6 m.2 SSD heatsink x 1
- 7,330 KRW
- HGST Ultrastar 7K4000 2TB HDD x 3
- 99,000 KRW
- HGST Ultrastar 7K2 2TB HDD x 1
- 43,000 KRW
- Total price: 698,030 KRW (1,460,030 KRW with previously owned ones)
### Backup server
- Synology DS124
- Processor: Realtek RTD1619B (4C4T)
- Memory: DDR4 1GB
- 1 Gbps NIC x 1
- SATA bay x 1
@@ -34,9 +34,9 @@
- 55,000 KRW
- Total price: 297,000 KRW
### Console (Laptop)
- Microsoft surface laptop 7th ZGJ-00021
- Processor: Snapdragon X Plus (ARM64, 10C10T)
- Memory: LPDDR5x 16GB
- SSD: 256GB SSD
- OS: Windows11 Home
@@ -49,7 +49,7 @@
- EFM 3.5 External HDD case ipTIME HDD3135 Plus x 1
- 29,400 KRW
- Seagate BARRACUDA HDD 2TB x 1
- 99,000 KRW (Previously owned)
- Total price: 128,400 KRW
## Devices
@@ -12,7 +12,7 @@
|infra|2002|2000|infrastructure|
|auth|2003|2000|authentication and authorization|
|app|2004|2000|services|
|console|2999|2000|console node(surface)|
### subuid and subgid
@@ -25,8 +25,8 @@
|port number|node|subnet|id|
|:-:|:-:|:-:|:-:|
|1|WTR Pro N150|Trunk|-|
|2|AP(Preparation)|USER|20|
|3|DS124(NAS)|CLIENT|1|
|4|Console|CLIENT|1|
|5|Printer|CLIENT|1|
|6|-|-|-|
@@ -39,10 +39,10 @@
|name|IPv4|IPv6|id|
|:-:|:-:|:-:|:-:|
|CLIENT|192.168.1.0/24|fd00:1::/64(ULA)|1|
|SERVER|192.168.10.0/24|fd00:10::/64(ULA)|10|
|USER|192.168.20.0/24|GUA from ISP|20|
|WG0|192.168.99.0/24|fd00:99::/64(ULA)|-|
### Host
@@ -68,12 +68,12 @@
- 192.168.99.1
- fd00:99::1
#### blocky (fw)
- SERVER
- 192.168.10.2
- fd00:10::2
#### bind (fw)
- SERVER
- 192.168.10.3
- fd00:10::3
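A hedged sketch of how the two fw DNS services could be wired together in blocky's `config.yml` (the local zone name and the public upstream are assumptions, not taken from this repo):

```yaml
# blocky config.yml (excerpt, illustrative)
conditional:
  mapping:
    ilnmors.com: 192.168.10.3           # forward the local zone to BIND9
upstreams:
  groups:
    default:
      - https://dns.quad9.net/dns-query # public resolver, example only
```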
@@ -1,6 +1,6 @@
# DHCP (Dynamic Host Configuration Protocol)
Before DHCP emerged, every client had to set its own static IP or use RARP (Reverse Address Resolution Protocol). Both have critical problems.
- Static IP
- Each host keeps its own IP regardless of whether it is running or not. This causes a shortage of IP addresses.
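fw's Kea DHCP avoids this by leasing addresses from a pool. A minimal `kea-dhcp4.conf` sketch for the CLIENT subnet; the pool bounds, lease time, listening interface, and gateway address are assumptions (only the subnet and the blocky DNS address come from this document):

```json
{
  "Dhcp4": {
    "interfaces-config": { "interfaces": [ "clients" ] },
    "valid-lifetime": 3600,
    "subnet4": [ {
      "id": 1,
      "subnet": "192.168.1.0/24",
      "pools": [ { "pool": "192.168.1.100 - 192.168.1.200" } ],
      "option-data": [
        { "name": "routers", "data": "192.168.1.1" },
        { "name": "domain-name-servers", "data": "192.168.10.2" }
      ]
    } ]
  }
}
```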
@@ -36,7 +36,7 @@ Forward zone has basically information of the pair of domain and IP address. The
- Reverse zone
A reverse zone likewise basically holds pairs of IP address and domain. The role of this zone is to turn an IP address back into a domain name. To do so it uses a special domain name, `[reversed_ip_address].in-addr.arpa` (e.g. 1.168.192.in-addr.arpa).
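The octet reversal can be reproduced with a one-line awk sketch (purely illustrative):

```shell
# build the PTR name for an IPv4 address by reversing its octets
ip="192.168.1.10"
ptr=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
echo "$ptr"   # 10.1.168.192.in-addr.arpa
```

`dig -x 192.168.1.10` performs the same transformation internally before querying.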
### Records
@@ -6,11 +6,11 @@ link-local address is for reserved subnets for L2 communication.
### APIPA
When a client cannot get an IP address from DHCP, the OS automatically allocates an address from the 169.254.0.0/16 subnet. This address can never pass an L3 point (router). It is usually used for internal communication in cloud environments, or for the pasta network in containers.
### RFC1918
These are the subnets 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16, which could originally communicate beyond an L3 point (router). However, they are reserved for LAN (Local Area Network) use because of the shortage of IPv4 addresses. They can communicate with other subnets, but they cannot be used in a WAN environment, which means an ISP cannot allocate these subnets to its clients.
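This is why fw's wan port must perform NAT for these ranges; a minimal nftables masquerade sketch (the interface name `wan0` is an assumption):

```
table inet nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        # rewrite RFC1918 source addresses leaving through the WAN interface
        oifname "wan0" masquerade
    }
}
```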
## IPv6
@@ -20,11 +20,11 @@ Link-local address is very important in IPv6 unlike IPv4. Basically, every edge
### IPv4
- 127.0.0.1: container itself
- 169.254.0.0/16: container and host communication (linklocal)
- RFC1918 for private LAN
- WAN
### IPv6
- `[::1]`: container itself
- `[fe80::]`: container and host communication (linklocal)
- `[fd00::]`: (ULA) for private LAN
- `[ Global IPv6 ]`
@@ -4,11 +4,11 @@ The concept of hardware passthrough is directly passing the hardware devices to
## GRUB
GRUB (Grand Unified Bootloader) is the bootloader for the OS. It runs first when the computer boots, loads the kernel into memory, and binds hardware to the kernel through its drivers. The configuration for this boot sequence is stored in the initramfs.
## IOMMU
IOMMU is an MMU (Memory Management Unit) for I/O devices. It translates and isolates the logical addresses of I/O devices into physical memory addresses, enabling safe DMA (Direct Memory Access) by those devices. When IOMMU is enabled in GRUB, the kernel can allocate and manage hardware addresses in physical memory, which makes it possible to reserve a physical address region for a device and pass it to a virtual machine.
## VFIO