Compare commits


37 Commits

Author SHA1 Message Date
il 1dd1c53e2a feat(backup): add archiving of deployed container images 2026-05-11 00:52:28 +09:00
il 530407c162 refactor(all): update hardcoded timezone 'Asia/Seoul' to ansible variable 'timezone' 2026-05-10 18:44:28 +09:00
il 11ab2f5205 fix(sure): correct task name and subuid variable reference 2026-05-10 14:39:54 +09:00
il 4527e39d0f chore(app): archive removed stacks from app
archived stacks:
- actual-budget
- ezbookkeeping
- opencloud
- trilium
- vikunja
- wikijs
2026-05-10 00:07:51 +09:00
il 02fa912cb1 feat(trilium): release trilium
deployment notes:
- oidc error (users cannot sign in on the first attempt; login is required twice when using oidc)
2026-05-09 22:38:57 +09:00
il aceef4bdaa refactor(authelia): update authelia.yaml.j2 to fix redirect_uris from hardcoded uris to ansible variables 2026-05-09 21:44:11 +09:00
il 64aad4fcf0 docs(all): fix markdown syntax and snippets 2026-05-09 20:54:32 +09:00
il 81244d55a7 feat(wiki.js): release wiki.js
deployment notes:
- use this as a personal/family wiki system
- compared with affine, memos, and triliumNext
2026-05-09 17:50:05 +09:00
il 1cfd024285 refactor(x509-exporter): update notification to restart x509-exporter when its config.yaml is changed 2026-05-09 17:42:35 +09:00
il 26115c5660 feat(redis): update redis from 8.6.1 to 8.6.3
update notes:
- run 'ansible-playbook playbooks/app/site.yaml --tags "site"' to update every redis instance at once
2026-05-09 13:55:28 +09:00
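The single-tag update above works because every per-service include in the site playbook carries the shared `site` tag through `apply`; a hedged sketch (the service shown is illustrative):

```yaml
# Sketch only: each service include is tagged "site" via apply, so running
# `ansible-playbook playbooks/app/site.yaml --tags "site"` re-runs every
# service task, and each changed redis container file triggers its restart.
- name: Set paperless  # illustrative; the playbook lists one include per stack
  ansible.builtin.include_role:
    name: "app"
    tasks_from: "services/set_paperless"
    apply:
      tags: ["site", "paperless"]
  tags: ["site", "paperless"]
```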
il acef35ca8b feat(postgresql): update postgresql and vectorchord extension
update notes:
- update postgresql version from 18.2 to 18.3
- update vectorchord version from 0.5.3 to 1.1.1
- add update flow and notice to postgresql.md
2026-05-09 13:54:10 +09:00
il b531170bd7 feat(vaultwarden): update vaultwarden from 1.35.8 to 1.36.0 2026-05-09 12:56:22 +09:00
il ad586c3cd3 feat(grafana): update grafana from 12.3.3 to 13.0.1 2026-05-09 12:50:36 +09:00
il 6dfef08f7b feat(prometheus): update prometheus from v3.9.1 to v3.11.3 2026-05-09 12:44:44 +09:00
il 934dd314a8 feat(x509-exporter): update x509-exporter from 3.21.0 to 4.1.0
update notes:
- '--listen-address' and '--watch-dir' cli flags are deprecated
- add '--config' cli flag and config.yaml
2026-05-09 12:44:05 +09:00
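Under 4.x the exporter is configured from a file instead of the deprecated flags; a minimal sketch of how the role could template it (paths and handler name are assumptions based on the repo's conventions, not the actual tasks):

```yaml
# Sketch, not the actual role: replace '--listen-address'/'--watch-dir' with
# a templated config.yaml passed via the new '--config' flag.
- name: Deploy x509-exporter config file
  ansible.builtin.template:
    src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/x509-exporter/config.yaml.j2"  # assumed path
    dest: "{{ node['home_path'] }}/containers/x509-exporter/config.yaml"  # assumed path
    owner: "{{ ansible_user }}"
    group: "svadmins"
    mode: "0644"
  notify: "notification_restart_x509-exporter"  # restart on change, per the x509-exporter refactor commit
```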
il 2529a918df feat(loki): update loki version from 3.6.5 to 3.7.1 2026-05-09 12:17:16 +09:00
il 7dfa20d3dd feat(ldap): update lldap version from 0.6.2 to 0.6.3 2026-05-09 11:56:19 +09:00
il 329620c7d7 feat(alloy): update alloy version from 1.13.0 to 1.16.1 2026-05-09 11:52:25 +09:00
il f820e89cf6 refactor(roles): update binary application installation flow
update notes:
- keep set_cli_tools responsible only for console CLI tools
- download and install kopia from the kopia role
- download and install blocky from the blocky role
- download and install alloy from the alloy role
- reduce console artifact staging for service binaries
2026-05-09 10:46:29 +09:00
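The per-role install flow above can be sketched as follows (the download URL pattern and destination paths are assumptions for illustration, not the repo's actual tasks):

```yaml
# Sketch: the kopia role fetches and installs its own binary instead of
# staging it through set_cli_tools on the console.
- name: Download kopia release archive
  ansible.builtin.get_url:
    url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/kopia-{{ version['packages']['kopia'] }}-linux-x64.tar.gz"  # assumed asset name
    dest: "/tmp/kopia.tar.gz"
    mode: "0644"
- name: Install kopia binary
  ansible.builtin.unarchive:
    src: "/tmp/kopia.tar.gz"
    dest: "/usr/local/bin"
    remote_src: true
    extra_opts: ["--strip-components=1"]  # assumed archive layout
  become: true
```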
il a05951f883 fix(crowdsec): optimize whitelist expressions
update notes:
- add http_status and http_verb to each expression (actual budget, immich, opencloud)
- fix the crowdsec and issues documents
2026-05-07 10:32:11 +09:00
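The scoping described above fits crowdsec's expression-based whitelist format; an illustrative sketch (the parser name and expression are hypothetical, not the repo's actual whitelist.yaml):

```yaml
# Hypothetical whitelist parser: constraining on http_status and http_verb
# keeps the whitelist narrow, so genuine probing is still banned.
name: ilnmors/app-whitelist  # assumed name
description: "Whitelist app-service false positives"
whitelist:
  reason: "legitimate app traffic misread as probing"
  expression:
    - evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET'  # hypothetical condition
```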
il b404a9e459 fix(crowdsec): update whitelist.yaml to prevent false positive
false positive:
- nextcloud thumbnail/preview 404 problem (crowdsecurity/http-probing)
2026-05-07 10:27:34 +09:00
il 3b4b56f53f fix(nftables): update fw nftables to allow vpn connection regardless of crowdsec ban 2026-05-07 09:22:49 +09:00
il f697715065 feat(sure): release sure (we-promise/sure)
deployment notes:
- let's try three budget apps: actual budget, ezbookkeeping, and sure
2026-05-06 18:52:31 +09:00
il be7f215380 feat(ezbookkeeping): release ezbookkeeping
deployment notes:
- use ezbookkeeping for budgeting
- compared with actual budget
- it has no RBAC or budget sharing, so also trying sure (we-promise/sure)
2026-05-06 15:56:19 +09:00
il 26e0fe4f8b docs(ADR): update ADR 007 - backup to add checking rule and flows 2026-05-06 14:30:25 +09:00
il 2bb1f015e0 fix(kopia): update the bound home path from %h to ansible variable
update note:
- hotfix
- backups haven't run since commit '9f236b6fa5'
- the root service unit's %h always resolves to root's home path
- backup service is verified
2026-05-06 14:06:22 +09:00
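The root cause can be sketched: in a system-scope (root) unit the %h specifier always expands to /root, so the template now renders the node's home path explicitly (unit name and paths below are assumptions for illustration):

```yaml
# Sketch: render the backed-up home path from inventory instead of %h, which
# a root-owned system unit expands to /root.
- name: Deploy kopia backup service
  ansible.builtin.template:
    src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/common/kopia/kopia-backup.service.j2"  # assumed path
    dest: "/etc/systemd/system/kopia-backup.service"  # assumed unit name
    owner: "root"
    group: "root"
    mode: "0644"
  become: true
# Inside the .service template, bind paths now use "{{ node['home_path'] }}"
# instead of the %h specifier.
```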
il 0f546e13b3 fix(btrfs): update btrfs scrub path
update notes:
- from '{{ node['home_path'] }}/data' to '{{ storage['btrfs']['mount_point'] }}'
2026-05-06 10:33:57 +09:00
il ba8b312bf2 feat(btrfs): update btrfs scrub service and timer on app vm 2026-05-06 08:15:53 +09:00
il 6fcedd9162 feat(collabora): release collabora
deployment note:
- link to nextcloud
- document opening is verified (including Korean fonts)
2026-05-05 21:20:31 +09:00
il 6ca4f61d50 docs(nextcloud): update security warning decisions and background job annotation
update notes:
- trusted_proxies warning
- HSTS option warning
- background job mode annotation
2026-05-05 20:09:00 +09:00
il 15c09cb899 docs(nextcloud): update how to disable auto-generated contacts in the nextcloud account 2026-05-03 12:05:11 +09:00
il 880857a70a fix(crowdsec): update parser 'crowdsecurity/nextcloud-whitelist'
update note:
- deprecate custom whitelist expression
- apply 'crowdsecurity/nextcloud-whitelist' parser
2026-05-03 07:19:59 +09:00
il 70bf539546 docs(issues): fix crowdsec whitelist regex to whitelist expressions 2026-05-02 20:40:10 +09:00
il 5dd38b7e49 fix(crowdsec): update whitelist.yaml to prevent false positive
false positive:
- chunk problems (crowdsecurity/http-crawl-non_statics)
- directory upload 404 problem (crowdsecurity/http-probing)
2026-05-02 20:38:48 +09:00
il 33d94211d1 docs(issues): fix crowdsec command 'cscli decision list' to 'cscli decision delete' 2026-05-02 19:46:51 +09:00
il 278dd3cebe feat(nextcloud): release nextcloud
deployment note:
- use nextcloud for groupware
- consider replacing vikunja and opencloud
2026-05-02 19:22:05 +09:00
il d1dcb1984a feat(vaultwarden): update vaultwarden version from 1.35.4 to 1.35.8 2026-04-30 10:03:33 +09:00
163 changed files with 2838 additions and 672 deletions
+2
@@ -2,6 +2,8 @@
data/bin/*
data/volumes/*
data/images/*
+!data/images/containers
+data/images/containers/*
docs/archives/textfiles/
docs/notes/*
*.sql
+5 -5
@@ -1,6 +1,6 @@
# ilnmors homelab README
-This homelab project implements single-node On-premise IaaS system. The homelab contains virtual machines which are divided by their roles, such as private firewall, DNS, PKI, LDAP and database, SSO\(OIDC\). The standard domain is used to implement this system without specific vendors. All components are defined as code and initiated by IaC \(Ansible\) except hypervisor initial configuration.
+This homelab project implements single-node On-premise IaaS system. The homelab contains virtual machines which are divided by their roles, such as private firewall, DNS, PKI, LDAP and database, SSO(OIDC). The standard domain is used to implement this system without specific vendors. All components are defined as code and initiated by IaC (Ansible) except hypervisor initial configuration.
## RTO times
- Feb/25/2026 - Reprovisioning Hypervisor and vms
@@ -15,12 +15,12 @@ This homelab project implements single-node On-premise IaaS system. The homelab
- Mar/5/2026 - Reprovisioning Hardware and Hypervisor and vms
- RTO: 2 hour 20 min
- console: 15min - verified
-- certificate: 0 min \(When it needs to be created, RTO will be 20 min) - not verified
+- certificate: 0 min (When it needs to be created, RTO will be 20 min) - not verified
-- wireguard: 0 min \(When it needs to be created, RTO will be 1 min) - not verified
+- wireguard: 0 min (When it needs to be created, RTO will be 1 min) - not verified
-- hypervisor\(+fw\): 45 min - verified
+- hypervisor(+fw): 45 min - verified
- switch: 1 min - verified
- dsm: 30 min - verified
-- kopia: 0 min \(When it needs to be created, RTO will be 10 min) - verified
+- kopia: 0 min (When it needs to be created, RTO will be 10 min) - verified
- Extra vms: 30 min - verified
- Etc: 30 min
+39 -36
@@ -1,6 +1,7 @@
---
# Global vars
ansible_ssh_private_key_file: "/etc/secrets/{{ hostvars['console']['node']['uid'] }}/id_console"
+timezone: "Asia/Seoul"
# CA
root_cert_filename: "ilnmors_root_ca.crt"
@@ -66,7 +67,7 @@ services:
grafana:
domain: "grafana"
ports:
-http: "3000"
+http: "3000" # Infra server: Internal ports
subuid: "100471"
caddy:
ports:
@@ -97,7 +98,7 @@ services:
public: "gitea"
internal: "gitea.app"
ports:
-http: "3000"
+http: "3000" # App server: Public ports
subuid: "100999"
immich: immich:
domain: domain:
@@ -109,13 +110,6 @@ services:
immich-ml:
ports:
http: "3003"
-actualbudget:
-domain:
-public: "budget"
-internal: "budget.app"
-ports:
-http: "5006"
-subuid: "101000"
paperless:
domain:
public: "paperless"
@@ -124,20 +118,6 @@ services:
http: "8001"
redis: "6380"
subuid: "100999"
-vikunja:
-domain:
-public: "vikunja"
-internal: "vikunja.app"
-ports:
-http: "3456"
-subuid: "100999"
-opencloud:
-domain:
-public: "opencloud"
-internal: "opencloud.app"
-ports:
-http: "9200"
-subuid: "100999"
manticore:
subuid: "100998"
affine:
@@ -148,6 +128,29 @@ services:
http: "3010"
redis: "6381"
manticore: "9308"
+nextcloud:
+domain:
+public: "nextcloud"
+internal: "nextcloud.app"
+ports:
+http: "8002"
+redis: "6382"
+subuid: "100032"
+collabora:
+domain:
+public: "collabora"
+internal: "collabora.app"
+ports:
+http: "9980"
+subuid: "101000"
+sure:
+domain:
+public: "sure"
+internal: "sure.app"
+ports:
+http: "3001"
+redis: "6383"
+subuid: "100999"
version:
packages:
@@ -155,32 +158,32 @@ version:
step: "0.30.2"
kopia: "0.22.3"
blocky: "0.29.0"
-alloy: "1.13.0"
+alloy: "1.16.1"
containers:
# common
caddy: "2.11.2"
# infra
step: "0.30.2"
-ldap: "v0.6.2"
+ldap: "v0.6.3"
-x509-exporter: "3.21.0"
+x509-exporter: "4.1.0"
-prometheus: "v3.9.1"
+prometheus: "v3.11.3"
-loki: "3.6.5"
+loki: "3.7.1"
-grafana: "12.3.3"
+grafana: "13.0.1"
## Postgresql
-postgresql: "18.2"
+postgresql: "18.3"
# For immich - https://github.com/immich-app/base-images/blob/main/postgres/versions.yaml
# pgvector: "v0.8.1"
-vectorchord: "0.5.3"
+vectorchord: "1.1.1"
# Auth
authelia: "4.39.19"
# App
-vaultwarden: "1.35.4"
+vaultwarden: "1.36.0"
gitea: "1.26.1"
-redis: "8.6.1"
+redis: "8.6.3"
immich: "v2.7.5"
-actualbudget: "26.3.0"
paperless: "2.20.15"
-vikunja: "2.2.2"
-opencloud: "4.0.6"
manticore: "25.0.0"
affine: "0.26.3"
+nextcloud: "33.0.3"
+collabora: "25.04.9.4.1"
+sure: "0.7.0-hotfix.2"
+25 -26
@@ -23,9 +23,9 @@
tags: ["always"]
tasks:
-- name: Set timezone to Asia/Seoul
+- name: Set timezone
community.general.timezone:
-name: Asia/Seoul
+name: "{{ timezone }}"
become: true
tags: ["init", "timezone"]
@@ -185,14 +185,6 @@
tags: ["site", "immich"]
tags: ["site", "immich"]
-- name: Set actual budget
-ansible.builtin.include_role:
-name: "app"
-tasks_from: "services/set_actual-budget"
-apply:
-tags: ["site", "actual-budget"]
-tags: ["site", "actual-budget"]
- name: Set paperless
ansible.builtin.include_role:
name: "app"
@@ -201,22 +193,6 @@
tags: ["site", "paperless"]
tags: ["site", "paperless"]
-- name: Set vikunja
-ansible.builtin.include_role:
-name: "app"
-tasks_from: "services/set_vikunja"
-apply:
-tags: ["site", "vikunja"]
-tags: ["site", "vikunja"]
-- name: Set opencloud
-ansible.builtin.include_role:
-name: "app"
-tasks_from: "services/set_opencloud"
-apply:
-tags: ["site", "opencloud"]
-tags: ["site", "opencloud"]
- name: Set affine
ansible.builtin.include_role:
name: "app"
@@ -225,6 +201,29 @@
tags: ["site", "affine"]
tags: ["site", "affine"]
+- name: Set nextcloud
+ansible.builtin.include_role:
+name: "app"
+tasks_from: "services/set_nextcloud"
+apply:
+tags: ["site", "nextcloud"]
+tags: ["site", "nextcloud"]
+- name: Set collabora
+ansible.builtin.include_role:
+name: "app"
+tasks_from: "services/set_collabora"
+apply:
+tags: ["site", "collabora"]
+tags: ["site", "collabora"]
+- name: Set sure
+ansible.builtin.include_role:
+name: "app"
+tasks_from: "services/set_sure"
+apply:
+tags: ["site", "sure"]
+tags: ["site", "sure"]
- name: Flush handlers right now
ansible.builtin.meta: "flush_handlers"
+2 -2
@@ -23,9 +23,9 @@
tags: ["always"]
tasks:
-- name: Set timezone to Asia/Seoul
+- name: Set timezone
community.general.timezone:
-name: Asia/Seoul
+name: "{{ timezone }}"
become: true
tags: ["init", "timezone"]
+10 -2
@@ -24,9 +24,9 @@
tasks:
# init
-- name: Set timezone to Asia/Seoul
+- name: Set timezone
community.general.timezone:
-name: Asia/Seoul
+name: "{{ timezone }}"
become: true
tags: ["init", "timezone"]
@@ -122,3 +122,11 @@
apply:
tags: ["init", "site", "tools"]
tags: ["init", "site", "tools"]
+- name: Set kopia
+ansible.builtin.include_role:
+name: "common"
+tasks_from: "services/set_kopia"
+apply:
+tags: ["init", "site", "kopia"]
+tags: ["init", "site", "kopia"]
+2 -2
@@ -23,9 +23,9 @@
tags: ["always"]
tasks:
-- name: Set timezone to Asia/Seoul
+- name: Set timezone
community.general.timezone:
-name: Asia/Seoul
+name: "{{ timezone }}"
become: true
tags: ["init", "timezone"]
+2 -2
@@ -23,9 +23,9 @@
tags: ["always"]
tasks:
-- name: Set timezone to Asia/Seoul
+- name: Set timezone
community.general.timezone:
-name: Asia/Seoul
+name: "{{ timezone }}"
become: true
tags: ["init", "timezone"]
+2 -2
@@ -30,9 +30,9 @@
tags: ["always"]
tasks:
# init
-- name: Set timezone to Asia/Seoul
+- name: Set timezone
community.general.timezone:
-name: Asia/Seoul
+name: "{{ timezone }}"
become: true
tags: ["init", "timezone"]
+37 -34
@@ -43,17 +43,6 @@
listen: "notification_restart_immich-ml"
ignore_errors: true # noqa: ignore-errors
-- name: Restart actual-budget
-ansible.builtin.systemd:
-name: "actual-budget.service"
-state: "restarted"
-enabled: true
-scope: "user"
-daemon_reload: true
-changed_when: false
-listen: "notification_restart_actual-budget"
-ignore_errors: true # noqa: ignore-errors
- name: Restart paperless
ansible.builtin.systemd:
name: "paperless.service"
@@ -65,29 +54,6 @@
listen: "notification_restart_paperless"
ignore_errors: true # noqa: ignore-errors
-- name: Restart vikunja
-ansible.builtin.systemd:
-name: "vikunja.service"
-state: "restarted"
-enabled: true
-scope: "user"
-daemon_reload: true
-changed_when: false
-listen: "notification_restart_vikunja"
-ignore_errors: true # noqa: ignore-errors
-- name: Restart opencloud
-ansible.builtin.systemd:
-name: "opencloud.service"
-state: "restarted"
-enabled: true
-daemon_reload: true
-scope: "user"
-when: is_opencloud_init.stat.exists
-changed_when: false
-listen: "notification_restart_opencloud"
-ignore_errors: true # noqa: ignore-errors
- name: Restart affine
ansible.builtin.systemd:
name: "affine.service"
@@ -99,3 +65,40 @@
changed_when: false
listen: "notification_restart_affine"
ignore_errors: true # noqa: ignore-errors
- name: Restart nextcloud
ansible.builtin.systemd:
name: "nextcloud.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_nextcloud_init.stat.exists
changed_when: false
listen: "notification_restart_nextcloud"
ignore_errors: true # noqa: ignore-errors
- name: Restart collabora
ansible.builtin.systemd:
name: "collabora.service"
state: "restarted"
enabled: true
scope: "user"
daemon_reload: true
changed_when: false
listen: "notification_restart_collabora"
ignore_errors: true # noqa: ignore-errors
- name: Restart sure
ansible.builtin.systemd:
name: "{{ item }}"
state: "restarted"
enabled: true
scope: "user"
daemon_reload: true
loop:
- "sure-web.service"
- "sure-worker.service"
changed_when: false
listen: "notification_restart_sure"
ignore_errors: true # noqa: ignore-errors
@@ -68,3 +68,23 @@
group: "svadmins"
mode: "0770"
become: true
- name: Deploy btrfs scrub service and timer
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/app/btrfs/{{ item }}.j2"
dest: "/etc/systemd/system/{{ item }}"
owner: "root"
group: "root"
mode: "0644"
loop:
- "btrfs-scrub.service"
- "btrfs-scrub.timer"
become: true
- name: Enable auto btrfs scrub
ansible.builtin.systemd:
name: "btrfs-scrub.timer"
state: "started"
enabled: true
daemon_reload: true
become: true
@@ -161,3 +161,38 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/{{ item.file }}.tar"
loop:
- image: "docker.io/manticoresearch/manticore:{{ version['containers']['manticore'] }}"
file: "docker.io_manticoresearch_manticore_{{ version['containers']['manticore'] }}"
- image: "docker.io/library/redis:{{ version['containers']['redis'] }}"
file: "docker.io_library_redis_{{ version['containers']['redis'] }}"
- image: "ghcr.io/toeverything/affine:{{ version['containers']['affine'] }}"
file: "ghcr.io_toeverything_affine_{{ version['containers']['affine'] }}"
loop_control:
label: "{{ item.file }}"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "{{ item.item.image }}"
dest: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
format: "oci-archive"
force: false
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
when: not item.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
@@ -0,0 +1,37 @@
---
- name: Deploy container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/collabora/collabora.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/collabora.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_collabora"
- name: Enable collabora.service
ansible.builtin.systemd:
name: "collabora.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_collabora_code_{{ version['containers']['collabora'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/collabora/code:{{ version['containers']['collabora'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_collabora_code_{{ version['containers']['collabora'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_collabora_code_{{ version['containers']['collabora'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -49,3 +49,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_gitea_gitea_{{ version['containers']['gitea'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/gitea/gitea:{{ version['containers']['gitea'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_gitea_gitea_{{ version['containers']['gitea'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_gitea_gitea_{{ version['containers']['gitea'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -118,3 +118,38 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/{{ item.file }}.tar"
loop:
- image: "docker.io/library/redis:{{ version['containers']['redis'] }}"
file: "docker.io_library_redis_{{ version['containers']['redis'] }}"
- image: "ghcr.io/immich-app/immich-machine-learning:{{ version['containers']['immich'] }}-openvino"
file: "ghcr.io_immich-app_immich-machine-learning_{{ version['containers']['immich'] }}-openvino"
- image: "ghcr.io/immich-app/immich-server:{{ version['containers']['immich'] }}"
file: "ghcr.io_immich-app_immich-server_{{ version['containers']['immich'] }}"
loop_control:
label: "{{ item.file }}"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "{{ item.item.image }}"
dest: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
format: "oci-archive"
force: false
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
when: not item.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
@@ -0,0 +1,209 @@
---
- name: Set redis service name
ansible.builtin.set_fact:
redis_service: "nextcloud"
- name: Create redis_nextcloud directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['redis']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "containers/redis"
- "containers/redis/{{ redis_service }}"
- "containers/redis/{{ redis_service }}/data"
become: true
- name: Deploy redis config file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.conf.j2"
dest: "{{ node['home_path'] }}/containers/redis/{{ redis_service }}/redis.conf"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_conf"
- name: Deploy redis container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/redis_{{ redis_service }}.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_containerfile"
- name: Enable (Restart) redis service
ansible.builtin.systemd:
name: "redis_{{ redis_service }}.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_redis_conf.changed or is_redis_containerfile.changed # noqa: no-handler
- name: Create nextcloud directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/nextcloud"
- "data/containers/nextcloud/html"
- "containers/nextcloud"
- "containers/nextcloud/ssl"
- "containers/nextcloud/ini"
become: true
- name: Check data directory empty
ansible.builtin.stat:
path: "{{ node['home_path'] }}/data/containers/nextcloud/.init"
register: "is_nextcloud_init"
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/nextcloud/ssl/{{ root_cert_filename }}"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_nextcloud"
no_log: true
- name: Initialize nextcloud
when: not is_nextcloud_init.stat.exists
block:
- name: Execute init command (Including pulling image)
containers.podman.podman_container:
name: "nextcloud_init"
image: "docker.io/library/nextcloud:{{ version['containers']['nextcloud'] }}"
command: "/bin/true"
state: "started"
rm: true
detach: false
env:
NEXTCLOUD_UPDATE: "1"
NEXTCLOUD_ADMIN_USER: "admin-local"
NEXTCLOUD_ADMIN_PASSWORD: "{{ hostvars['console']['nextcloud']['admin-local']['password'] }}"
POSTGRES_HOST: "{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}"
POSTGRES_DB: "nextcloud_db"
POSTGRES_USER: "nextcloud"
POSTGRES_PASSWORD: "{{ hostvars['console']['postgresql']['password']['nextcloud'] }}"
PGSSLMODE: "verify-full"
PGSSLROOTCERT: "/etc/ssl/nextcloud/{{ root_cert_filename }}"
PGSSLCERTMODE: "disable"
REDIS_HOST: "host.containers.internal"
REDIS_HOST_PORT: "{{ services['nextcloud']['ports']['redis'] }}"
volume:
- "{{ node['home_path'] }}/containers/nextcloud/ssl:/etc/ssl/nextcloud:ro"
- "{{ node['home_path'] }}/data/containers/nextcloud/html:/var/www/html:rw"
no_log: true
- name: Create .init file
ansible.builtin.file:
path: "{{ node['home_path'] }}/data/containers/nextcloud/.init"
state: "touch"
mode: "0644"
owner: "{{ ansible_user }}"
group: "svadmins"
- name: Deploy config files
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/config/{{ item }}.j2"
dest: "{{ node['home_path'] }}/data/containers/nextcloud/html/config/{{ item }}"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0640"
loop:
- "background.config.php"
- "cache.config.php"
- "domain.config.php"
- "local_remote.config.php"
- "user_oidc.config.php"
become: true
notify: "notification_restart_nextcloud"
- name: Deploy opcache.ini file
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/ini/{{ item }}"
dest: "{{ node['home_path'] }}/containers/nextcloud/ini/{{ item }}"
group: "svadmins"
mode: "0644"
loop:
- "opcache.ini"
- "upload.ini"
notify: "notification_restart_nextcloud"
- name: Deploy nextcloud.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/nextcloud.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/nextcloud.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_nextcloud"
- name: Deploy nextcloud-cron service
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/systemd/{{ item }}"
dest: "{{ node['home_path'] }}/.config/systemd/user/{{ item }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
loop:
- "nextcloud-cron.service"
- "nextcloud-cron.timer"
- name: Enable nextcloud.service
ansible.builtin.systemd:
name: "nextcloud.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
- name: Enable nextcloud-cron.timer
ansible.builtin.systemd:
name: "nextcloud-cron.timer"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/{{ item.file }}.tar"
loop:
- image: "docker.io/library/redis:{{ version['containers']['redis'] }}"
file: "docker.io_library_redis_{{ version['containers']['redis'] }}"
- image: "docker.io/library/nextcloud:{{ version['containers']['nextcloud'] }}"
file: "docker.io_library_nextcloud_{{ version['containers']['nextcloud'] }}"
loop_control:
label: "{{ item.file }}"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "{{ item.item.image }}"
dest: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
format: "oci-archive"
force: false
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
when: not item.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
@@ -122,3 +122,36 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/{{ item.file }}.tar"
loop:
- image: "docker.io/library/redis:{{ version['containers']['redis'] }}"
file: "docker.io_library_redis_{{ version['containers']['redis'] }}"
- image: "ghcr.io/paperless-ngx/paperless-ngx:{{ version['containers']['paperless'] }}"
file: "ghcr.io_paperless-ngx_paperless-ngx_{{ version['containers']['paperless'] }}"
loop_control:
label: "{{ item.file }}"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "{{ item.item.image }}"
dest: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
format: "oci-archive"
force: false
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
when: not item.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
@@ -0,0 +1,143 @@
---
- name: Set redis service name
ansible.builtin.set_fact:
redis_service: "sure"
- name: Create redis_sure directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['redis']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "containers/redis"
- "containers/redis/{{ redis_service }}"
- "containers/redis/{{ redis_service }}/data"
become: true
- name: Deploy redis config file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.conf.j2"
dest: "{{ node['home_path'] }}/containers/redis/{{ redis_service }}/redis.conf"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_conf"
- name: Deploy redis container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/redis_{{ redis_service }}.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_containerfile"
- name: Enable (Restart) redis service
ansible.builtin.systemd:
name: "redis_{{ redis_service }}.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_redis_conf.changed or is_redis_containerfile.changed # noqa: no-handler
- name: Create sure directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['sure']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/sure"
- "data/containers/sure/storage"
- "containers/sure"
- "containers/sure/ssl"
become: true
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/sure/ssl/{{ root_cert_filename }}"
owner: "{{ services['sure']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_sure"
no_log: true
- name: Register secret value to podman secret
containers.podman.podman_secret:
name: "{{ item.name }}"
data: "{{ item.value }}"
state: "present"
force: true
loop:
- name: "SURE_SECRET_KEY_BASE"
value: "{{ hostvars['console']['sure']['session_secret'] }}"
- name: "SURE_POSTGRES_PASSWORD"
value: "{{ hostvars['console']['postgresql']['password']['sure'] }}"
- name: "SURE_OIDC_CLIENT_SECRET"
value: "{{ hostvars['console']['sure']['oidc']['secret'] }}"
notify: "notification_restart_sure"
no_log: true
- name: Deploy sure.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/sure/{{ item }}.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/{{ item }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
loop:
- "sure-web.container"
- "sure-worker.container"
notify: "notification_restart_sure"
- name: Enable sure.service
ansible.builtin.systemd:
name: "{{ item }}"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
loop:
- "sure-web.service"
- "sure-worker.service"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/{{ item.file }}.tar"
loop:
- image: "docker.io/library/redis:{{ version['containers']['redis'] }}"
file: "docker.io_library_redis_{{ version['containers']['redis'] }}"
- image: "ghcr.io/we-promise/sure:{{ version['containers']['sure'] }}"
file: "ghcr.io_we-promise_sure_{{ version['containers']['sure'] }}"
loop_control:
label: "{{ item.file }}"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "{{ item.item.image }}"
dest: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
format: "oci-archive"
force: false
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
when: not item.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/{{ item.item.file }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
loop: "{{ container_archive_images.results }}"
loop_control:
label: "{{ item.item.file }}"
@@ -55,3 +55,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_vaultwarden_server_{{ version['containers']['vaultwarden'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/vaultwarden/server:{{ version['containers']['vaultwarden'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_vaultwarden_server_{{ version['containers']['vaultwarden'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_vaultwarden_server_{{ version['containers']['vaultwarden'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -76,3 +76,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_authelia_authelia_{{ version['containers']['authelia'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/authelia/authelia:{{ version['containers']['authelia'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_authelia_authelia_{{ version['containers']['authelia'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_authelia_authelia_{{ version['containers']['authelia'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -5,9 +5,10 @@
    - hardware
  become: true
- - name: Deploy alloy deb file (x86_64)
-   ansible.builtin.copy:
-     src: "{{ hostvars['console']['node']['data_path'] }}/bin/alloy-{{ version['packages']['alloy'] }}-amd64.deb"
+ - name: Download alloy deb file (x86_64)
+   ansible.builtin.get_url:
+     url: "https://github.com/grafana/alloy/releases/download/v{{ version['packages']['alloy'] }}/\
+       alloy-{{ version['packages']['alloy'] }}-1.amd64.deb"
    dest: "/var/cache/apt/archives/alloy-{{ version['packages']['alloy'] }}.deb"
    owner: "root"
    group: "root"
@@ -15,9 +16,10 @@
  become: true
  when: ansible_facts['architecture'] == "x86_64"
- - name: Deploy alloy deb file (aarch64)
-   ansible.builtin.copy:
-     src: "{{ hostvars['console']['node']['data_path'] }}/bin/alloy-{{ version['packages']['alloy'] }}-arm64.deb"
+ - name: Download alloy deb file (aarch64)
+   ansible.builtin.get_url:
+     url: "https://github.com/grafana/alloy/releases/download/v{{ version['packages']['alloy'] }}/\
+       alloy-{{ version['packages']['alloy'] }}-1.arm64.deb"
    dest: "/var/cache/apt/archives/alloy-{{ version['packages']['alloy'] }}.deb"
    owner: "root"
    group: "root"
@@ -30,6 +32,7 @@
    deb: "/var/cache/apt/archives/alloy-{{ version['packages']['alloy'] }}.deb"
    state: "present"
  become: true
+ notify: "notification_restart_alloy"
- name: Deploy alloy config
  ansible.builtin.template:
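The two alloy download tasks above are guarded by `ansible_facts['architecture']` because Debian package names use amd64/arm64 while the Ansible fact reports x86_64/aarch64. A hedged shell sketch of that mapping (`deb_arch` is a hypothetical helper, not part of the playbook):

```shell
# Hypothetical helper showing the fact-to-suffix mapping the guarded
# tasks encode: x86_64 -> amd64, aarch64 -> arm64.
deb_arch() {
  case "$1" in
    x86_64)  echo "amd64" ;;
    aarch64) echo "arm64" ;;
    *)       echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

deb_arch "x86_64"   # prints amd64
```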
@@ -97,3 +97,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/ilnmors.internal_{{ node['name'] }}_caddy_{{ version['containers']['caddy'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "ilnmors.internal/{{ node['name'] }}/caddy:{{ version['containers']['caddy'] }}"
dest: "{{ node['home_path'] }}/archives/containers/ilnmors.internal_{{ node['name'] }}_caddy_{{ version['containers']['caddy'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/ilnmors.internal_{{ node['name'] }}_caddy_{{ version['containers']['caddy'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -36,10 +36,15 @@
  ansible.builtin.set_fact:
    acquisd_list:
      fw:
-       collection: "crowdsecurity/suricata"
+       collection:
+         - "crowdsecurity/suricata"
+       parser: []
        config: "suricata.yaml"
      auth:
-       collection: "crowdsecurity/caddy"
+       collection:
+         - "crowdsecurity/caddy"
+       parser:
+         - "crowdsecurity/nextcloud-whitelist"
        config: "caddy.yaml"
- name: Deploy crowdsec-update service files
@@ -181,7 +186,8 @@
  block:
    - name: Install crowdsec collection
      ansible.builtin.command:
-       cmd: "cscli collections install {{ acquisd_list[node['name']]['collection'] }}"
+       cmd: "cscli collections install {{ item }}"
+     loop: "{{ acquisd_list[node['name']]['collection'] }}"
      become: true
      changed_when: "'overwrite' not in is_collection_installed.stderr"
      failed_when:
@@ -189,6 +195,17 @@
- "'already installed' not in is_collection_installed.stderr"
register: "is_collection_installed"
- name: Install crowdsec parser
ansible.builtin.command:
cmd: "cscli parsers install {{ item }}"
loop: "{{ acquisd_list[node['name']]['parser'] }}"
become: true
changed_when: "'overwrite' not in is_parser_installed.stderr"
failed_when:
- is_parser_installed.rc != 0
- "'already installed' not in is_parser_installed.stderr"
register: "is_parser_installed"
- name: Create crowdsec acquis.d directory
ansible.builtin.file:
path: "/etc/crowdsec/acquis.d"
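The collection and parser install tasks above use `failed_when`/`changed_when` to treat cscli's "already installed" notice as an idempotent no-op rather than a failure. A rough shell equivalent (`install_hub_item` is a hypothetical wrapper; the stderr wording is taken from the task conditions, not verified against cscli itself):

```shell
# Hypothetical wrapper mirroring the failed_when conditions above:
# a non-zero exit is only fatal when stderr does not say "already installed".
install_hub_item() {
  kind="$1"; item="$2"
  err=$(cscli "$kind" install "$item" 2>&1); rc=$?
  if [ "$rc" -ne 0 ] && ! printf '%s' "$err" | grep -q 'already installed'; then
    printf '%s\n' "$err" >&2
    return "$rc"
  fi
  return 0
}
```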
@@ -5,34 +5,36 @@
    - hardware
  become: true
- - name: Check kopia installation
-   ansible.builtin.shell: |
-     command -v kopia
-   changed_when: false
-   failed_when: false
-   register: "is_kopia_installed"
-   ignore_errors: true
- name: Set console kopia
  when: node['name'] == 'console'
  block:
-   - name: Apply cli tools (x86_64)
+   - name: Download kopia
+     ansible.builtin.get_url:
+       url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
+         kopia_{{ version['packages']['kopia'] }}_linux_{{ item }}.deb"
+       dest: "{{ node['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-{{ item }}.deb"
+       owner: "{{ ansible_user }}"
+       group: "svadmins"
+       mode: "0600"
+     loop:
+       - "amd64"
+       - "arm64"
+   - name: Install kopia (x86_64)
      ansible.builtin.apt:
        deb: "{{ node['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-amd64.deb"
        state: "present"
      become: true
-     when:
-       - ansible_facts['architecture'] == "x86_64"
-       - is_kopia_installed.rc != 0
+     when: ansible_facts['architecture'] == "x86_64"
-   - name: Apply cli tools (aarch64)
+   - name: Install kopia (aarch64)
      ansible.builtin.apt:
        deb: "{{ node['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-arm64.deb"
        state: "present"
      become: true
-     when:
-       - ansible_facts['architecture'] == "aarch64"
-       - is_kopia_installed.rc != 0
+     when: ansible_facts['architecture'] == "aarch64"
-   - name: Connect kopia server
+   - name: Connect console kopia server
      environment:
        KOPIA_PASSWORD: "{{ hostvars['console']['kopia']['user']['console'] }}"
      ansible.builtin.shell: |
@@ -51,30 +53,36 @@
- name: Set kopia uid
  ansible.builtin.set_fact:
    kopia_uid: 951
- - name: Deploy kopia deb file (x86_64)
-   ansible.builtin.copy:
-     src: "{{ hostvars['console']['node']['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-amd64.deb"
+ - name: Download kopia deb file (x86_64)
+   ansible.builtin.get_url:
+     url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
+       kopia_{{ version['packages']['kopia'] }}_linux_amd64.deb"
    dest: "/var/cache/apt/archives/kopia-{{ version['packages']['kopia'] }}.deb"
    owner: "root"
    group: "root"
    mode: "0644"
  become: true
  when: ansible_facts['architecture'] == "x86_64"
- - name: Deploy kopia deb file (aarch64)
-   ansible.builtin.copy:
-     src: "{{ hostvars['console']['node']['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-arm64.deb"
+ - name: Download kopia deb file (aarch64)
+   ansible.builtin.get_url:
+     url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
+       kopia_{{ version['packages']['kopia'] }}_linux_arm64.deb"
    dest: "/var/cache/apt/archives/kopia-{{ version['packages']['kopia'] }}.deb"
    owner: "root"
    group: "root"
    mode: "0644"
  become: true
  when: ansible_facts['architecture'] == "aarch64"
- name: Create kopia group
  ansible.builtin.group:
    name: "kopia"
    gid: "{{ kopia_uid }}"
    state: "present"
  become: true
- name: Create kopia user
  ansible.builtin.user:
    name: "kopia"
@@ -85,6 +93,7 @@
comment: "Kopia backup User"
state: "present"
become: true
- name: Create kopia directory
ansible.builtin.file:
path: "{{ item.name }}"
@@ -101,12 +110,13 @@
    mode: "0700"
  become: true
  no_log: true
- name: Install kopia
  ansible.builtin.apt:
    deb: "/var/cache/apt/archives/kopia-{{ version['packages']['kopia'] }}.deb"
    state: "present"
  become: true
-   when: is_kopia_installed.rc != 0
- name: Deploy kopia env
  ansible.builtin.template:
    src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/common/kopia/kopia.env.j2"
@@ -116,6 +126,7 @@
mode: "0400"
become: true
no_log: true
- name: Deploy kopia service files
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/common/kopia/{{ item }}.j2"
@@ -128,6 +139,7 @@
- "kopia-backup.service"
- "kopia-backup.timer"
become: true
- name: Enable auto kopia rules update
ansible.builtin.systemd:
name: "kopia-backup.timer"
@@ -24,6 +24,17 @@
mode: "0770"
when: node['name'] == "app"
- name: Create container image archive directory
ansible.builtin.file:
path: "{{ item }}"
owner: "{{ ansible_user }}"
group: "svadmins"
state: "directory"
mode: "0700"
loop:
- "{{ node['home_path'] }}/archives"
- "{{ node['home_path'] }}/archives/containers"
- name: Install podman and reset ssh connection for initiating
when: is_podman_installed.rc != 0
become: true
@@ -49,42 +49,6 @@
- "amd64"
- "arm64"
- - name: Download kopia
-   ansible.builtin.get_url:
-     url: "https://github.com/kopia/kopia/releases/download/v{{ version['packages']['kopia'] }}/\
-       kopia_{{ version['packages']['kopia'] }}_linux_{{ item }}.deb"
-     dest: "{{ node['data_path'] }}/bin/kopia-{{ version['packages']['kopia'] }}-{{ item }}.deb"
-     owner: "{{ ansible_user }}"
-     group: "svadmins"
-     mode: "0600"
-   loop:
-     - "amd64"
-     - "arm64"
- - name: Download blocky
-   ansible.builtin.get_url:
-     url: "https://github.com/0xERR0R/blocky/releases/download/v{{ version['packages']['blocky'] }}/\
-       blocky_v{{ version['packages']['blocky'] }}_Linux_{{ item }}.tar.gz"
-     dest: "{{ node['data_path'] }}/bin/blocky-{{ version['packages']['blocky'] }}-{{ item }}.tar.gz"
-     owner: "{{ ansible_user }}"
-     group: "svadmins"
-     mode: "0600"
-   loop:
-     - "x86_64"
-     - "arm64"
- - name: Download alloy
-   ansible.builtin.get_url:
-     url: "https://github.com/grafana/alloy/releases/download/v{{ version['packages']['alloy'] }}/\
-       alloy-{{ version['packages']['alloy'] }}-1.{{ item }}.deb"
-     dest: "{{ node['data_path'] }}/bin/alloy-{{ version['packages']['alloy'] }}-{{ item }}.deb"
-     owner: "{{ ansible_user }}"
-     group: "svadmins"
-     mode: "0600"
-   loop:
-     - "amd64"
-     - "arm64"
- name: Apply cli tools (x86_64)
ansible.builtin.apt:
deb: "{{ node['data_path'] }}/bin/{{ item }}"
@@ -92,7 +56,6 @@
  loop:
    - "sops-{{ version['packages']['sops'] }}-amd64.deb"
    - "step-{{ version['packages']['step'] }}-amd64.deb"
-   - "kopia-{{ version['packages']['kopia'] }}-amd64.deb"
become: true
when: ansible_facts['architecture'] == "x86_64"
@@ -103,6 +66,5 @@
  loop:
    - "sops-{{ version['packages']['sops'] }}-arm64.deb"
    - "step-{{ version['packages']['step'] }}-arm64.deb"
-   - "kopia-{{ version['packages']['kopia'] }}-arm64.deb"
become: true
when: ansible_facts['architecture'] == "aarch64"
@@ -23,7 +23,7 @@
    state: "present"
  become: true
- - name: Create blocky etc directory
+ - name: Create blocky directory
    ansible.builtin.file:
      path: "{{ item }}"
      owner: "blocky"
@@ -31,13 +31,38 @@
      mode: "0750"
      state: "directory"
    loop:
+     - "/home/blocky"
+     - "/home/blocky/bin"
      - "/etc/blocky"
      - "/etc/blocky/ssl"
    become: true
- name: Download blocky (x86_64)
ansible.builtin.get_url:
url: "https://github.com/0xERR0R/blocky/releases/download/v{{ version['packages']['blocky'] }}/\
blocky_v{{ version['packages']['blocky'] }}_Linux_x86_64.tar.gz"
dest: "/home/blocky/bin/blocky-{{ version['packages']['blocky'] }}-x86_64.tar.gz"
owner: "blocky"
group: "blocky"
mode: "0600"
become: true
when: ansible_facts['architecture'] == "x86_64"
- name: Download blocky (aarch64)
ansible.builtin.get_url:
url: "https://github.com/0xERR0R/blocky/releases/download/v{{ version['packages']['blocky'] }}/\
blocky_v{{ version['packages']['blocky'] }}_Linux_arm64.tar.gz"
dest: "/home/blocky/bin/blocky-{{ version['packages']['blocky'] }}-arm64.tar.gz"
owner: "blocky"
group: "blocky"
mode: "0600"
become: true
when: ansible_facts['architecture'] == "aarch64"
- name: Deploy blocky binary file (x86_64)
  ansible.builtin.unarchive:
-   src: "{{ hostvars['console']['node']['data_path'] }}/bin/blocky-{{ version['packages']['blocky'] }}-x86_64.tar.gz"
+   src: "/home/blocky/bin/blocky-{{ version['packages']['blocky'] }}-x86_64.tar.gz"
+   remote_src: true
    dest: "/usr/local/bin/"
    owner: "root"
    group: "root"
@@ -52,7 +77,8 @@
- name: Deploy blocky binary file (aarch64)
  ansible.builtin.unarchive:
-   src: "{{ hostvars['console']['node']['data_path'] }}/bin/blocky-{{ version['packages']['blocky'] }}-arm64.tar.gz"
+   src: "/home/blocky/bin/blocky-{{ version['packages']['blocky'] }}-arm64.tar.gz"
+   remote_src: true
    dest: "/usr/local/bin/"
    owner: "root"
    group: "root"
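With `remote_src: true`, the blocky tarball is now downloaded and unpacked on the managed host itself rather than copied from the console. A minimal sketch of what the unarchive step amounts to (`extract_binary` is a hypothetical helper; paths are illustrative):

```shell
# Illustrative equivalent of the unarchive task with remote_src: true —
# the archive already sits on the managed host, so extraction is local.
extract_binary() {
  tarball="$1"; destdir="$2"
  tar -xzf "$tarball" -C "$destdir"
}
```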
@@ -78,3 +78,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_smallstep_step-ca_{{ version['containers']['step'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/smallstep/step-ca:{{ version['containers']['step'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_smallstep_step-ca_{{ version['containers']['step'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_smallstep_step-ca_{{ version['containers']['step'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -83,3 +83,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_grafana_grafana_{{ version['containers']['grafana'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/grafana/grafana:{{ version['containers']['grafana'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_grafana_grafana_{{ version['containers']['grafana'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_grafana_grafana_{{ version['containers']['grafana'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -75,7 +75,7 @@
    rm: true
    detach: false
    env:
-     TZ: "Asia/Seoul"
+     TZ: "{{ timezone }}"
      LLDAP_LDAP_BASE_DN: "{{ domain['dc'] }}"
    secrets:
      - "LLDAP_DATABASE_URL,type=env"
@@ -108,3 +108,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_lldap_lldap_{{ version['containers']['ldap'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/lldap/lldap:{{ version['containers']['ldap'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_lldap_lldap_{{ version['containers']['ldap'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_lldap_lldap_{{ version['containers']['ldap'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -64,3 +64,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_grafana_loki_{{ version['containers']['loki'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/grafana/loki:{{ version['containers']['loki'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_grafana_loki_{{ version['containers']['loki'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_grafana_loki_{{ version['containers']['loki'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -9,8 +9,9 @@
    - "gitea"
    - "immich"
    - "paperless"
-   - "vikunja"
    - "affine"
+   - "nextcloud"
+   - "sure"
- name: Create postgresql directory
  ansible.builtin.file:
@@ -171,3 +172,26 @@
daemon_reload: true
scope: "user"
loop: "{{ connected_services }}"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/\
ilnmors.internal_{{ node['name'] }}_postgres_pg{{ version['containers']['postgresql'] }}-vectorchord{{ version['containers']['vectorchord'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "ilnmors.internal/{{ node['name'] }}/postgres:pg{{ version['containers']['postgresql'] }}-vectorchord{{ version['containers']['vectorchord'] }}"
dest: "{{ node['home_path'] }}/archives/containers/\
ilnmors.internal_{{ node['name'] }}_postgres_pg{{ version['containers']['postgresql'] }}-vectorchord{{ version['containers']['vectorchord'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/\
ilnmors.internal_{{ node['name'] }}_postgres_pg{{ version['containers']['postgresql'] }}-vectorchord{{ version['containers']['vectorchord'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -68,3 +68,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_prom_prometheus_{{ version['containers']['prometheus'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/prom/prometheus:{{ version['containers']['prometheus'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_prom_prometheus_{{ version['containers']['prometheus'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_prom_prometheus_{{ version['containers']['prometheus'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -8,9 +8,20 @@
    mode: "0770"
  loop:
    - "x509-exporter"
+   - "x509-exporter/config"
    - "x509-exporter/certs"
  become: true
- name: Deploy config.yaml
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/x509-exporter/config/config.yaml"
dest: "{{ node['home_path'] }}/containers/x509-exporter/config/config.yaml"
owner: "{{ services['x509-exporter']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_x509-exporter"
- name: Deploy certificates
ansible.builtin.copy:
content: |
@@ -57,3 +68,23 @@
enabled: true
daemon_reload: true
scope: "user"
- name: Check container archive images
ansible.builtin.stat:
path: "{{ node['home_path'] }}/archives/containers/docker.io_enix_x509-certificate-exporter_{{ version['containers']['x509-exporter'] }}.tar"
register: container_archive_images
- name: Save container archive images
containers.podman.podman_save:
image:
- "docker.io/enix/x509-certificate-exporter:{{ version['containers']['x509-exporter'] }}"
dest: "{{ node['home_path'] }}/archives/containers/docker.io_enix_x509-certificate-exporter_{{ version['containers']['x509-exporter'] }}.tar"
format: "oci-archive"
force: false
when: not container_archive_images.stat.exists
- name: Fetch container archive images
ansible.builtin.fetch:
src: "{{ node['home_path'] }}/archives/containers/docker.io_enix_x509-certificate-exporter_{{ version['containers']['x509-exporter'] }}.tar"
dest: "{{ hostvars['console']['node']['data_path'] }}/images/containers/"
flat: true
@@ -82,6 +82,8 @@ table inet filter {
chain global {
# invalid packets
ct state invalid drop comment "deny invalid connection"
# VPN connection exception handling
udp dport $PORTS_VPN return comment "return vpn connection to input and forward chain"
# crowdsec
ip saddr @crowdsec-blacklists counter drop comment "deny all crowdsec blacklist"
ip6 saddr @crowdsec6-blacklists counter drop comment "deny all ipv6 crowdsec blacklist"
@@ -117,8 +117,9 @@ postgresql:
gitea: ENC[AES256_GCM,data:l+pBCzyQa3000SE9z1R4htD0V0ONsBtKy92dfgsVYsZ3XlEyVJDIBOsugwM=,iv:5t/oHW1vFAmV/s2Ze/cV9Vuqo96Qu6QvZeRbio7VX2s=,tag:4zeQaXiXIzBpy+tXsxmN7Q==,type:str] gitea: ENC[AES256_GCM,data:l+pBCzyQa3000SE9z1R4htD0V0ONsBtKy92dfgsVYsZ3XlEyVJDIBOsugwM=,iv:5t/oHW1vFAmV/s2Ze/cV9Vuqo96Qu6QvZeRbio7VX2s=,tag:4zeQaXiXIzBpy+tXsxmN7Q==,type:str]
immich: ENC[AES256_GCM,data:11jvxTKA/RL0DGL6y2/X092hnDohj6yTrYGK4IVojqBd1gCOBnDvUjgmx14=,iv:oBfHxsx9nxhyKY/WOuWfybxEX2bf+lHEtsaifFRS9lg=,tag:tAfkBdgQ8ZEkLIFcDICKDw==,type:str] immich: ENC[AES256_GCM,data:11jvxTKA/RL0DGL6y2/X092hnDohj6yTrYGK4IVojqBd1gCOBnDvUjgmx14=,iv:oBfHxsx9nxhyKY/WOuWfybxEX2bf+lHEtsaifFRS9lg=,tag:tAfkBdgQ8ZEkLIFcDICKDw==,type:str]
paperless: ENC[AES256_GCM,data:6VBrBbjVoam7SkZCSvoBTdrfkUoDghdGTiBmFLul04X/okXOHeC5zusJffY=,iv:iZumcJ3TWwZD77FzYx8THwCqC+EbnXUBrEKuPh3zgV8=,tag:u2m8SppAdxZ/duNdpuS3oQ==,type:str] paperless: ENC[AES256_GCM,data:6VBrBbjVoam7SkZCSvoBTdrfkUoDghdGTiBmFLul04X/okXOHeC5zusJffY=,iv:iZumcJ3TWwZD77FzYx8THwCqC+EbnXUBrEKuPh3zgV8=,tag:u2m8SppAdxZ/duNdpuS3oQ==,type:str]
vikunja: ENC[AES256_GCM,data:/+wQdoFPTBG2elI9kZbAVWrHZ0DhMaYr4dc+2z9QNdb3TcDS2PEia0JuSAg=,iv:MViZTyUD8YqMmxSTWCQpJ30f/KQdQGOzPlRHHsQ8lAw=,tag:zov3POno139dkMxFDpj2gg==,type:str]
affine: ENC[AES256_GCM,data:XPXrcszsV06YqCJZ7CDqc4rCwqqNlbtLCFYfLAQ8jamLtft8L2UVrMA4WZo=,iv:vrWdBeckxB9tmEE628j4jhU+hSpE6TXYMGt0hh1Cg84=,tag:hlWwWUGht8NqWTZREMsa1Q==,type:str] affine: ENC[AES256_GCM,data:XPXrcszsV06YqCJZ7CDqc4rCwqqNlbtLCFYfLAQ8jamLtft8L2UVrMA4WZo=,iv:vrWdBeckxB9tmEE628j4jhU+hSpE6TXYMGt0hh1Cg84=,tag:hlWwWUGht8NqWTZREMsa1Q==,type:str]
nextcloud: ENC[AES256_GCM,data:ROsximNuWYMTZktmLJPx7W1Qol/uT+APgwoCtFO/6ZYYc3KxKvlk344eqEc=,iv:4d+MrfIHjJKAcwhvZ3g4go66uZcieuL7lngKErJd+fg=,tag:QbWOtxeCbiu62GyrE2atXg==,type:str]
sure: ENC[AES256_GCM,data:FULJ2gjJ2gZC3s324itW+CjGRBHIP9RnOqw5TT1UaiUhb7UHAPm1na+LsZk=,iv:c0GnVZkxprJUzPPq3TCQaZvAes9QQuvDXqgVLLaiQIg=,tag:uDxy/Lkd2hNK4AWwMNMslw==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
@@ -209,14 +210,6 @@ immich:
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:bzMt0Ox0Za4dOhoo7S6dYCdK32JI9Q==,iv:PRTryIJk0tR545XY0LoHwklvsJp5+A5bEljNmzUvRhY=,tag:EVsjRUGMOadaNbMu0Xr4XA==,type:comment]
actualbudget:
oidc:
secret: ENC[AES256_GCM,data:TE2umZ9Vvr7cSfA2+TAfRadIWZN3hyOKQ6U9NqJFm5e9iiw1avI+QlnYcKI=,iv:rUWoclBRqh0tsGnMq29395Fn2NP7AXnSCd0s+S8jQ6I=,tag:qPX/TcdIo6BJeex7wmi02Q==,type:str]
hash: ENC[AES256_GCM,data:UjhNkGj+sxbnmPUx1V5kVYwZnzsB0aEvN8YV29lcvMbSnf9xpQWwD5C93Zu8SYrnS/p88qZpGBgAjr9Pcly3y0H1YMRt9zzbHZU3Uo0DPDrSWRQdeB/8LkcM/cwMAs8arS6PO03ECNnN5Z6aTmFdFnLjUkvUuSWMFscItAzMzhWCpeY=,iv:B06LI7Cq3NN8haOLfN3gWIpUFnvdUlq6D2XmARojDpk=,tag:MflE8qcY5j/aAA7xfPCqng==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:McPUAbIUvtC1gdPaxTgAxAMCMWcLfg==,iv:Tp6idRf7he3sYzo8LW596C905JAaoTIhIoDUzSyRT0k=,tag:4mZQ0Swu1X9uuwjsRNhr2A==,type:comment]
paperless:
session_secret: ENC[AES256_GCM,data:siwCs2noeVpg9DCEZybnmo/oz11BdrHSTnHciMOu/6g=,iv:XVjhu10TIujIdUopN9+TVVqRade9EvItDWxym6YXnZs=,tag:TxLYm+4Bo7IMaTQBtMg9pQ==,type:str]
@@ -228,22 +221,6 @@ paperless:
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:V7DJHA2JQirfBsrCGhXrhg==,iv:+jYqX9hGNnuyYj9o9LpCYFVOoD6nSrtc4t40Ag0mMzo=,tag:1wSxKtkJm42reUxdwYDvlg==,type:comment]
vikunja:
session_secret: ENC[AES256_GCM,data:CMyw8JGHyTczGsrOJJwQBKfXMU4Sudvwkur1Lgx4o64=,iv:F2VmpqddiDT4jGaGDKGl6FARsQOt3lLz3X6TjC2MIVU=,tag:UJYyzrl/FX1BNwY4ROFncA==,type:str]
oidc:
secret: ENC[AES256_GCM,data:QwqndYsfr+fh9OLkHYtLYCa6WUdhnL7A4btz1d1eelTwq3Kps5S6BUN5qZg=,iv:51N8byIAAUh4ky7YBAuEJOBEWu1d9AX5W1m37/cLlCM=,tag:GD7jbxNGd748TCPgqsxyMg==,type:str]
hash: ENC[AES256_GCM,data:ORifyT4u1V2CyBCNBgF72wwS2i05mlzA4iIVEa1cH9aaE69PdiQvGGzMHK+tmlfpVaVQEENSt1QDUSSlMyeuZT/3a0JwAvlz+XDbpS7bicL2cB6DCa4JyEd/rbGRXs0/COfxPxXzYv7jq9gd2uSJ+cCGYb/93WuEXSEI6PHi+FF7N94=,iv:FVSGySa4YB2vwenqSagBzxeIexg91ewvcQMix+etmng=,tag:yyQtOgzOZypba+rV3A1K9g==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:EsRGZP7snPchEAMoQN5PoQpiOA==,iv:A/8POGq3pIw7aX5S2vyKtI2vPqH0FT6yZnpe/vVbifw=,tag:BgUYHX2zxIL7yLS0JbI1Yg==,type:comment]
opencloud:
admin:
password: ENC[AES256_GCM,data:VKG7sNTTLHCXRGf4SAlR91+hvc7PaNrnpJX/4kItVcT9W1Hdl/yKgHHD7M8=,iv:WwWnx9KuN+i/Ugwv+HY4IGDZrLHk71hsobGFOn9kml0=,tag:SS6ihrtZjLnlAJR59lw+gw==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:k55osvepVeB1RC5hZ4IF,iv:AlhfmWwn/DiSESWc+ULJSOLUhnrKAIfWr7MeiwV8qc8=,tag:hOgptwUcY6nVxPIhu+DYgw==,type:comment]
affine:
secret_key: ENC[AES256_GCM,data:LLX78DpYnha1JWhgw0sHLzIVq/oIzvT+nB7zgli4mroGbnt7WZaXCx34zKkYRwYj/+0L4IFFVdkzKtK5DO84SgFkS2Bk2iNdCMqIx80CpyiD8IWAcyRu5d6hh82PlgyxU80T/4nbLbIn0GLubPTTeUX8GC3VxRU=,iv:DnmvbhlygSHes0jAkIm4+WXMUQLzr4R4dNa33rO67v8=,tag:+2wlh+/ekiTyShWM4XBbUw==,type:str]
@@ -255,6 +232,25 @@ affine:
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:PZS7EbvMHqHGorNUGAWj4dk1,iv:vOE+djRAvBTMM51kHi6kG5Arw3uPXlJt1d/BpcEaD0c=,tag:AuoCHLQz42CYvVVdKFWu1Q==,type:comment]
nextcloud:
admin-local:
password: ENC[AES256_GCM,data:mIwF5A09oqYbdK3bOKid9A896Q5J5Q6Ax+vDNqEJFGNdzd/mJ4oQS6rva+s=,iv:QroUMST2wnEJzk6DySe9tPZaWuqdxzJZ0+oi6mW6x00=,tag:3UTzjupK7+omrI3Hvyr8bA==,type:str]
oidc:
secret: ENC[AES256_GCM,data:Sr4KkKkYdkU0UWdpfUF7PyiGoerjBiw+sOFcENyLxw0FRXGG0Y8gv5uGb4Q=,iv:LbGsNM3+iY7bWFQe88TepVKUdiRQWZ+K7Ubn6ze6lV4=,tag:SbcfIAMW9ZprgahOFU4IQQ==,type:str]
hash: ENC[AES256_GCM,data:CkstbIYQmi72QhsbJZN0lQedgCn7TmGpYcYj0n+NvJIoTlol8G9N/88cwGbVoGK9nEISv54FL94cEJFppnMIuj0BHrhasrZsyI2/Lj52YLWdwNJWNQ+iYt+Ifp/1kI0zqmdoajzZ5DS2w/1evCBC1+JdfTRlpVXmSsHUIPIHelBRj90=,iv:vwvT5TTkF4woxXOvrRRqmrdLXf19s47NIDtdT+zLp0U=,tag:KC0MS0DTH6j3zIHOjCFOSA==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:Fsqc2JDp9dvfgiCjdQ==,iv:3DALKKEXaP8hzXRvxD4CgfFpOiPPsOa16OB94n8WKp8=,tag:K+FF3zGrc0YLXWK/R2L3Ow==,type:comment]
sure:
session_secret: ENC[AES256_GCM,data:InHsz/jld8E9TwI8MWpxk9x2I7dxlIsY9R6jtDK2pBA=,iv:HY5yXEC2Dce26e9/vXTIWELvVd9ZjhcCwFD0jhz5pPw=,tag:LLSJovZ0RH3CUK+se7R4Ag==,type:str]
oidc:
secret: ENC[AES256_GCM,data:9BSvpcU9BJctSN9bkPIAsRxg8JNHTWvOKdpJFhm//CUDm/Xc7oC/ANHf5no=,iv:JVQLl/rp65kZSK/4SpVXxtiac3Z35XNkxWm2+lEdq/c=,tag:WgfaORiNlrO+wHSdnl4CWQ==,type:str]
hash: ENC[AES256_GCM,data:EjJ+1fP7/9wG2jG0Jv2hxMLtErqxjHBstRjru79dd5ZXhqwT7S+jpLfl9WpZU9qi20ps9YP4qe7G08p6NJNXjYhQj852GQxEORRh/9StAZsPt3p8w+ePZSVbivPQH+FpPKWYxoH0VR7y3TnL66R0tKRLh1fNTc5jRy5rU5r1bfs1jZ0=,iv:0y9FxW4QdD7qHz3bPRWlwHFpvOsvlYhVrOItB6BzaE8=,tag:Wc7MZhP3QRYmvZcjpoEWtQ==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:T4Wtn49AAxPd2QUFTR+q,iv:bH5goGWBDqumAat9dUv2OwfCUJUpuVqncTMqMBZUXhI=,tag:G+W6hHA+yftQ+4RJpXrxHg==,type:comment]
switch:
password: ENC[AES256_GCM,data:qu0f9L7A0eFq/UCpaRs=,iv:W8LLOp3MSfd/+EfNEZNf91K8GgI5eUfVPoWTRES2C0Y=,tag:Q5FlAOfwqwJwPvd7k6i+0g==,type:str]
@@ -284,7 +280,7 @@ sops:
UmliaFNxVTBqRkI1QWJpWGpTRWxETW8KEY/8AfU73UOzCGhny1cNnd5dCNv7bHXt
k+uyWPPi+enFkVaceSwMFrA66uaWWrwAj11sXEB7yzvGFPrnAGezjQ==
-----END AGE ENCRYPTED FILE-----
-lastmodified: "2026-04-06T14:32:22Z"
+lastmodified: "2026-05-09T14:26:51Z"
-mac: ENC[AES256_GCM,data:OFiSsBBAzOUoOwnAwhaplQQ8k2kUo+Avzk475BpaiOJoaB2c0wsJ3siP15tcLMrav4Qw8boZFo64v+rjdMoNI/MRo1EOYWNr1ZRMqHzwmQeaiMH2QcfoRZ0oLqrn5ekQztuPR9ULjDYZb63AwVGmzseUf4R5lGXgdgN5tjU/pH4=,iv:hqzDwryMuJ7JnkBazzDSznw05m7k61Sk61aPgO3JtpU=,tag:Lhhlgwy+YuQ1S0hkbsjecg==,type:str]
+mac: ENC[AES256_GCM,data:TYs08ZSS2kcO5lYuhQ/IySUSQ3DpL+ba3/uNLyszht4OttR110/W/WQLiRuu/Ql6FwtDtjq6I3iNpOhmCHSv1kMCam1l99GEIYCaPUIY+TY3Zw0j7518dFXe8p/DrKRwIVXfK5lIKLIEd+eizD50HzwXXJFmU+7YDkQ1Dx+55kw=,iv:arJKJ4wO4sdQlu3GZbtultsfM6s8vbhG93tnf2EjJDc=,tag:m95gUqvn4w85XI8qVvCZpQ==,type:str]
unencrypted_suffix: _unencrypted
version: 3.12.1
@@ -19,7 +19,7 @@ Volume=%h/containers/affine/config:/root/.affine/config
Volume=%h/containers/affine/ssl:/etc/ssl/affine:ro
# General
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
## OIDC callback URIs
Environment="AFFINE_SERVER_HOST={{ services['affine']['domain']['public'] }}.{{ domain['public'] }}"
Environment="AFFINE_SERVER_EXTERNAL_URL=https://{{ services['affine']['domain']['public'] }}.{{ domain['public'] }}"
@@ -0,0 +1,25 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Collabora Online
[Container]
Image=docker.io/collabora/code:{{ version['containers']['collabora'] }}
ContainerName=collabora
HostName=collabora
PublishPort={{ services['collabora']['ports']['http'] }}:9980/tcp
Environment="TZ={{ timezone }}"
Environment="aliasgroup1=https://{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}"
# Environment="aliasgroup2=other_server_FQDN"
Environment="extra_params=--o:ssl.enable=false --o:ssl.termination=true --o:server_name={{ services['collabora']['domain']['public'] }}.{{ domain['public'] }} --o:admin_console.enable=false"
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -19,7 +19,7 @@ Volume=%h/data/containers/gitea:/data:rw
Volume=%h/containers/gitea/ssl:/etc/ssl/gitea:ro
# General
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
Environment="GITEA__server__DISABLE_SSH=true"
# Database
Environment="GITEA__database__DB_TYPE=postgres"
@@ -21,7 +21,7 @@ PodmanArgs=--group-add keep-groups
Volume=%h/containers/immich/ml/cache:/cache:rw
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
[Service]
Restart=always
@@ -24,7 +24,7 @@ Volume=%h/data/containers/immich:/data:rw
Volume=%h/containers/immich/ssl:/etc/ssl/immich:ro
# Environment
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
# The new environment from version 2.7.0 to enable CSP
Environment="IMMICH_HELMET_FILE=true"
@@ -14,7 +14,7 @@ PublishPort={{ services[manticore_service]['ports']['manticore'] }}:9308
Volume=%h/data/containers/manticore/{{ manticore_service }}:/var/lib/manticore:rw
# General
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
[Service]
Restart=always
@@ -0,0 +1,5 @@
<?php
$CONFIG = [
// Background jobs mode is auto-detected as 'cron' when nextcloud-cron.timer runs cron.php via CLI. No explicit config required.
'maintenance_window_start' => 18,
];
@@ -0,0 +1,12 @@
<?php
$CONFIG = [
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
'host' => 'host.containers.internal',
'port' => {{ services['nextcloud']['ports']['redis'] }},
'timeout' => 1.5,
'dbindex' => 0,
],
];
@@ -0,0 +1,9 @@
<?php
$CONFIG = [
'trusted_domains' => [
'{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
],
'overwritehost' => '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
'overwriteprotocol' => 'https',
'overwrite.cli.url' => 'https://{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
];
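The overwrite settings in the template above expand against the Ansible inventory. A minimal sketch of how the Jinja expressions resolve, using placeholder values (`cloud` and `example.com` are assumptions, not the real inventory values):

```python
# Placeholder inventory values; the real ones live in Ansible group/host vars.
services = {"nextcloud": {"domain": {"public": "cloud"}}}
domain = {"public": "example.com"}

# Mirrors the Jinja expression {{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}
host = f"{services['nextcloud']['domain']['public']}.{domain['public']}"
config = {
    "trusted_domains": [host],
    "overwritehost": host,
    "overwriteprotocol": "https",
    "overwrite.cli.url": f"https://{host}",
}
print(config["overwrite.cli.url"])  # https://cloud.example.com
```

Every overwrite key points at the same public host, which is what Nextcloud expects when it sits behind a TLS-terminating reverse proxy.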
@@ -0,0 +1,4 @@
<?php
$CONFIG = [
'allow_local_remote_servers' => true,
];
@@ -0,0 +1,9 @@
<?php
$CONFIG = [
'user_oidc' => [
'default_token_endpoint_auth_method' => 'client_secret_post',
'auto_provision' => true,
'soft_auto_provision' => true,
'disable_account_creation' => false,
],
];
@@ -0,0 +1,14 @@
; /usr/local/etc/php/conf.d/opcache-recommended.ini
; OPcache tuning
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=512
opcache.interned_strings_buffer=32
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0
opcache.save_comments=1
opcache.revalidate_freq=60
opcache.fast_shutdown=1
; APCu CLI activate
apc.enable_cli=1
@@ -0,0 +1,6 @@
; /usr/local/etc/php/conf.d/nextcloud-upload.ini
upload_max_filesize=16G
post_max_size=16G
memory_limit=1024M
max_execution_time=3600
max_input_time=3600
@@ -0,0 +1,36 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Nextcloud
[Container]
Image=docker.io/library/nextcloud:{{ version['containers']['nextcloud'] }}
ContainerName=nextcloud
HostName=nextcloud
PublishPort={{ services['nextcloud']['ports']['http'] }}:80
Volume=%h/containers/nextcloud/ssl:/etc/ssl/nextcloud:ro
Volume=%h/containers/nextcloud/ini/opcache.ini:/usr/local/etc/php/conf.d/opcache-recommended.ini:ro
Volume=%h/containers/nextcloud/ini/upload.ini:/usr/local/etc/php/conf.d/upload.ini:ro
Volume=%h/data/containers/nextcloud/html:/var/www/html:rw
# General
Environment="TZ={{ timezone }}"
# PostgreSQL
Environment="PGSSLMODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/nextcloud/{{ root_cert_filename }}"
## libpq in Nextcloud automatically tries to use a client certificate for mTLS, so when only server-side TLS is required, disable that behavior explicitly.
Environment="PGSSLCERTMODE=disable"
# Redis
Environment="REDIS_HOST=host.containers.internal"
Environment="REDIS_HOST_PORT={{ services['nextcloud']['ports']['redis'] }}"
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,8 @@
[Unit]
Description=Nextcloud cron.php
Requires=nextcloud.service
After=nextcloud.service
[Service]
Type=oneshot
ExecStart=/usr/bin/podman exec -u www-data nextcloud php -f /var/www/html/cron.php
@@ -0,0 +1,10 @@
[Unit]
Description=Run Nextcloud cron every 5 minutes
[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloud-cron.service
[Install]
WantedBy=timers.target
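`OnBootSec=` plus `OnUnitActiveSec=` gives the timer a fixed cadence: first activation five minutes after boot, then five minutes after each previous activation. A small sketch of the resulting schedule, assuming each cron.php run is effectively instantaneous (long runs would shift later activations):

```python
def schedule(on_boot_min: int, on_active_min: int, horizon_min: int) -> list[int]:
    """Activation times in minutes since boot: first at on_boot_min,
    then every on_active_min, up to horizon_min."""
    runs, t = [], on_boot_min
    while t <= horizon_min:
        runs.append(t)
        t += on_active_min
    return runs

print(schedule(5, 5, 30))  # [5, 10, 15, 20, 25, 30]
```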
@@ -20,8 +20,8 @@ Volume=%h/data/containers/paperless/consume:/usr/src/paperless/consume:rw
Volume=%h/containers/paperless/ssl:/etc/ssl/paperless:ro
# General
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
-Environment="PAPERLESS_TIME_ZONE=Asia/Seoul"
+Environment="PAPERLESS_TIME_ZONE={{ timezone }}"
Environment="PAPERLESS_URL=https://{{ services['paperless']['domain']['public'] }}.{{ domain['public'] }}"
Environment="PAPERLESS_OCR_LANGUAGE=kor+eng"
Environment="PAPERLESS_OCR_LANGUAGES=kor"
@@ -20,7 +20,7 @@ Volume=%h/containers/redis/{{ redis_service }}/redis.conf:/usr/local/etc/redis/r
Exec=redis-server /usr/local/etc/redis/redis.conf
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
[Service]
Restart=always
@@ -0,0 +1,67 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Sure Web
After=network-online.target redis_sure.service
Wants=network-online.target redis_sure.service
[Container]
Image=ghcr.io/we-promise/sure:{{ version['containers']['sure'] }}
ContainerName=sure-web
HostName=sure-web
PublishPort={{ services['sure']['ports']['http'] }}:3000/tcp
Volume=%h/data/containers/sure/storage:/rails/storage:rw
Volume=%h/containers/sure/ssl:/etc/ssl/sure:ro
# General
Environment="TZ={{ timezone }}"
Environment="SELF_HOSTED=true"
Environment="ONBOARDING_STATE=closed"
Environment="RAILS_FORCE_SSL=false"
Environment="RAILS_ASSUME_SSL=true"
Environment="APP_DOMAIN={{ services['sure']['domain']['public'] }}.{{ domain['public'] }}"
Secret=SURE_SECRET_KEY_BASE,type=env,target=SECRET_KEY_BASE
# PostgreSQL
Environment="POSTGRES_USER=sure"
Environment="POSTGRES_DB=sure_db"
Environment="DB_HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}"
Environment="DB_PORT={{ services['postgresql']['ports']['tcp'] }}"
Environment="PGSSLMODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/sure/{{ root_cert_filename }}"
Secret=SURE_POSTGRES_PASSWORD,type=env,target=POSTGRES_PASSWORD
# Redis
Environment="REDIS_URL=redis://host.containers.internal:{{ services['sure']['ports']['redis'] }}/1"
# OIDC - Authelia
Environment="OIDC_CLIENT_ID=sure"
Environment="OIDC_ISSUER=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="OIDC_REDIRECT_URI=https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}/auth/openid_connect/callback"
Secret=SURE_OIDC_CLIENT_SECRET,type=env,target=OIDC_CLIENT_SECRET
Environment="OIDC_BUTTON_LABEL=Sign in with Authelia"
Environment="AUTH_JIT_MODE=create_and_link"
# Allowed sign-up email domains, e.g. ilnmors.internal permits only user@ilnmors.internal to sign up
Environment="ALLOWED_OIDC_DOMAINS="
# WebAuthn / Passkey
Environment="WEBAUTHN_RP_ID={{ domain['public'] }}"
Environment="WEBAUTHN_ALLOWED_ORIGINS=https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}"
# Provider
## Currency
Environment="EXCHANGE_RATE_PROVIDER=yahoo_finance"
Environment="SECURITIES_PROVIDER=yahoo_finance"
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
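The `ExecStartPre=/usr/bin/nc -zv …` line above gates container startup on PostgreSQL accepting TCP connections. The probe that `nc -zv host port` performs can be sketched as follows (a hypothetical helper for illustration, not part of the deployment):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout,
    mirroring what `nc -zv host port` checks."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

With `Restart=always` and `RestartSec=10s`, a failing probe simply delays startup until the database is up, rather than letting Rails crash-loop against a closed socket.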
@@ -0,0 +1,67 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Sure Worker
After=network-online.target redis_sure.service
Wants=network-online.target redis_sure.service
[Container]
Image=ghcr.io/we-promise/sure:{{ version['containers']['sure'] }}
ContainerName=sure-worker
HostName=sure-worker
Volume=%h/data/containers/sure/storage:/rails/storage:rw
Volume=%h/containers/sure/ssl:/etc/ssl/sure:ro
Exec=bundle exec sidekiq
# General
Environment="TZ={{ timezone }}"
Environment="SELF_HOSTED=true"
Environment="ONBOARDING_STATE=closed"
Environment="RAILS_FORCE_SSL=false"
Environment="RAILS_ASSUME_SSL=true"
Environment="APP_DOMAIN={{ services['sure']['domain']['public'] }}.{{ domain['public'] }}"
Secret=SURE_SECRET_KEY_BASE,type=env,target=SECRET_KEY_BASE
# PostgreSQL
Environment="POSTGRES_USER=sure"
Environment="POSTGRES_DB=sure_db"
Environment="DB_HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}"
Environment="DB_PORT={{ services['postgresql']['ports']['tcp'] }}"
Environment="PGSSLMODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/sure/{{ root_cert_filename }}"
Secret=SURE_POSTGRES_PASSWORD,type=env,target=POSTGRES_PASSWORD
# Redis
Environment="REDIS_URL=redis://host.containers.internal:{{ services['sure']['ports']['redis'] }}/1"
# OIDC - Authelia
Environment="OIDC_CLIENT_ID=sure"
Environment="OIDC_ISSUER=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="OIDC_REDIRECT_URI=https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}/auth/openid_connect/callback"
Secret=SURE_OIDC_CLIENT_SECRET,type=env,target=OIDC_CLIENT_SECRET
Environment="OIDC_BUTTON_LABEL=Sign in with Authelia"
Environment="AUTH_JIT_MODE=create_and_link"
# Allowed sign-up email domains, e.g. ilnmors.internal permits only user@ilnmors.internal to sign up
Environment="ALLOWED_OIDC_DOMAINS="
# WebAuthn / Passkey
Environment="WEBAUTHN_RP_ID={{ domain['public'] }}"
Environment="WEBAUTHN_ALLOWED_ORIGINS=https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}"
# Provider
## Currency
Environment="EXCHANGE_RATE_PROVIDER=yahoo_finance"
Environment="SECURITIES_PROVIDER=yahoo_finance"
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -18,7 +18,7 @@ PublishPort={{ services['vaultwarden']['ports']['http'] }}:80/tcp
Volume=%h/data/containers/vaultwarden:/data:rw
Volume=%h/containers/vaultwarden/ssl:/etc/ssl/vaultwarden:ro
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
Environment="DOMAIN=https://{{ services['vaultwarden']['domain']['public'] }}.{{ domain['public'] }}"
Environment="SIGNUPS_ALLOWED=false"
Secret=VW_ADMIN_TOKEN,type=env,target=ADMIN_TOKEN
@@ -22,7 +22,7 @@ Volume=%h/containers/authelia/config:/config:rw
Volume=%h/containers/authelia/certs:/etc/ssl/authelia:ro
# Default
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
# Enable Go template engine
# !CAUTION!
{% raw %}# If this environment were enabled, you would have to use {{/* ... */}} for {{ go_filter }} options. The Go engine always processes its own grammar first.
@@ -93,17 +93,6 @@ notifier:
identity_providers:
oidc:
hmac_secret: '' # $AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
# For the app which doesn't use secret.
cors:
endpoints:
- 'authorization'
- 'token'
- 'revocation'
- 'introspection'
- 'userinfo'
allowed_origins:
- 'https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}'
allowed_origins_from_client_redirect_uris: true
jwks:{% raw %}
- algorithm: 'RS256'
use: 'sig'
@@ -184,28 +173,6 @@ identity_providers:
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/actual-budget/
- client_id: 'actual-budget'
client_name: 'Actual Budget'
client_secret: '{{ hostvars['console']['actualbudget']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}/openid/callback'
scopes:
- 'openid'
- 'profile'
- 'groups'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
# https://www.authelia.com/integration/openid-connect/clients/paperless/
- client_id: 'paperless'
client_name: 'Paperless'
@@ -228,122 +195,6 @@ identity_providers:
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/vikunja/
- client_id: 'vikunja'
client_name: 'Vikunja'
client_secret: '{{ hostvars['console']['vikunja']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://{{ services['vikunja']['domain']['public'] }}.{{ domain['public'] }}/auth/openid/authelia'
scopes:
- 'openid'
- 'profile'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
# OpenCloud configuration
## https://docs.opencloud.eu/docs/admin/configuration/authentication-and-user-management/external-idp/
## Web
- client_id: 'opencloud'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}/'
- 'https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}/oidc-callback.html'
- 'https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}/oidc-silent-redirect.html'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## desktop
- client_id: 'OpenCloudDesktop'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'http://localhost'
- 'http://127.0.0.1'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## Android
- client_id: 'OpenCloudAndroid'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'oc://android.opencloud.eu'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## IOS
- client_id: 'OpenCloudIOS'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'oc://ios.opencloud.eu'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
# https://docs.affine.pro/self-host-affine/administer/oauth-2-0
- client_id: 'affine'
client_name: 'Affine'
@@ -365,3 +216,47 @@ identity_providers:
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/nextcloud/#openid-connect-user-backend-app
- client_id: 'nextcloud'
client_name: 'Nextcloud'
client_secret: '{{ hostvars['console']['nextcloud']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}/apps/user_oidc/code'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/sure/
- client_id: 'sure'
client_name: 'Sure'
client_secret: '{{ hostvars['console']['sure']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://{{ services['sure']['domain']['public'] }}.{{ domain['public'] }}/auth/openid_connect/callback'
scopes:
- 'openid'
- 'email'
- 'profile'
- 'groups'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
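The `client_secret` entries above are pbkdf2 digests of the plaintext secrets, which Authelia's own `authelia crypto hash generate pbkdf2` command produces. Purely as an illustration of the digest shape, a stdlib sketch follows; it is not guaranteed byte-compatible with Authelia's encoding, and the function name and example secret are hypothetical:

```python
import base64
import hashlib
import os

def pbkdf2_sha512_phc(secret, salt=None, iterations=310000):
    """Illustrative PHC-style pbkdf2-sha512 string; generate real values
    with Authelia's CLI, not with this sketch."""
    salt = salt or os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha512", secret.encode(), salt, iterations)
    b64 = lambda b: base64.b64encode(b).decode().rstrip("=")  # unpadded base64
    return f"$pbkdf2-sha512${iterations}${b64(salt)}${b64(dk)}"

print(pbkdf2_sha512_phc("example-client-secret").split("$")[1])  # pbkdf2-sha512
```

The plaintext half of each pair lives SOPS-encrypted on the `console` host (the `oidc.secret` values), while Authelia's config only ever sees the hash.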
Volume=%h/containers/caddy/data:/data:rw
Volume=/var/log/caddy:/log:rw
{% endif %}
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
Secret=CADDY_ACME_KEY,target=/run/secrets/CADDY_ACME_KEY
{% if node['name'] == 'auth' %}
@@ -47,33 +47,33 @@
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['actualbudget']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['actualbudget']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['paperless']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['paperless']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['vikunja']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['vikunja']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['opencloud']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['opencloud']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['affine']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['affine']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['nextcloud']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['nextcloud']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['collabora']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['collabora']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['sure']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['sure']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
@@ -91,15 +91,6 @@
}
}
}
{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['actualbudget']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['paperless']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
@@ -109,24 +100,6 @@
}
}
}
{{ services['vikunja']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['vikunja']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['opencloud']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['affine']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
@@ -136,6 +109,33 @@
}
}
}
{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['nextcloud']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['collabora']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['collabora']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['sure']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['sure']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
# Internal domain
{{ node['name'] }}.{{ domain['internal'] }} {
@@ -21,7 +21,7 @@ Volume=%h/containers/ca/config:/home/step/config:rw
Volume=%h/containers/ca/db:/home/step/db:rw
Volume=%h/containers/ca/templates:/home/step/templates:rw
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
# Since 0.30.0, Docker CMD no longer expands PWDPATH.
#Environment="PWDPATH=/run/secrets/STEP_CA_PASSWORD"
@@ -24,7 +24,7 @@ Volume=%h/containers/grafana/data:/var/lib/grafana:rw
Volume=%h/containers/grafana/etc:/etc/grafana:ro
Volume=%h/containers/grafana/ssl:/etc/ssl/grafana:ro
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
Environment="GF_PATHS_CONFIG=/etc/grafana/grafana.ini"
# plugin
# Environment="GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource"
@@ -24,7 +24,7 @@ Volume=%h/containers/ldap/data:/data:rw
Volume=%h/containers/ldap/ssl:/etc/ssl/ldap:ro
# Default
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
# Domain
Environment="LLDAP_LDAP_BASE_DN={{ domain['dc'] }}"
@@ -19,7 +19,7 @@ Volume=%h/containers/loki/data:/loki:rw
Volume=%h/containers/loki/etc:/etc/loki:ro
Volume=%h/containers/loki/ssl:/etc/ssl/loki:ro
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
Exec=--config.file=/etc/loki/loki.yaml
@@ -21,7 +21,7 @@ Volume=%h/containers/postgresql/ssl:/etc/ssl/postgresql:ro
Volume=%h/containers/postgresql/init:/docker-entrypoint-initdb.d/:ro
Volume=%h/containers/postgresql/backups:/backups:rw
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
# This option is only for the init process; after init, the custom config file `pg_hba.conf` controls it.
Environment="POSTGRES_HOST_AUTH_METHOD=trust"
@@ -19,7 +19,7 @@ Volume=%h/containers/prometheus/data:/prometheus:rw
Volume=%h/containers/prometheus/etc:/etc/prometheus:ro
Volume=%h/containers/prometheus/ssl:/etc/ssl/prometheus:ro
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
Exec=--config.file=/etc/prometheus/prometheus.yaml \
--web.config.file=/etc/prometheus/web-config.yaml \
@@ -0,0 +1,11 @@
server:
listen: :9793
sources:
- kind: file
name: homelab-certs
paths:
- /certs/*.crt
- /certs/*.pem
- /certs/*.cer
refreshInterval: 1m
@@ -11,11 +11,12 @@ Image=docker.io/enix/x509-certificate-exporter:{{ version['containers']['x509-ex
ContainerName=x509-exporter
HostName=X509-exporter
Volume=%h/containers/x509-exporter/config/config.yaml:/etc/config.yaml:ro
Volume=%h/containers/x509-exporter/certs:/certs:ro
PublishPort={{ services['x509-exporter']['ports']['http'] }}:9793
-Exec=--listen-address :9793 --watch-dir=/certs
+Exec=--config /etc/config.yaml
[Service]
Restart=always
@@ -0,0 +1,10 @@
[Unit]
Description=BTRFS auto scrub
ConditionPathIsMountPoint={{ storage['btrfs']['mount_point'] }}
RequiresMountsFor={{ storage['btrfs']['mount_point'] }}
[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs scrub start -Bd {{ storage['btrfs']['mount_point'] }}
Nice=19
IOSchedulingClass=idle
@@ -0,0 +1,10 @@
[Unit]
Description=Monthly BTRFS auto scrub
[Timer]
OnCalendar=*-*-01 04:00:00
Persistent=true
RandomizedDelaySec=300
[Install]
WantedBy=timers.target
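For reference, `OnCalendar=*-*-01 04:00:00` fires at 04:00 on the first day of every month (plus up to `RandomizedDelaySec=300` of jitter). A small sketch, in Python purely for illustration, of how the next trigger time is derived:

```python
from datetime import datetime

def next_monthly_trigger(now: datetime) -> datetime:
    """Next occurrence of 'OnCalendar=*-*-01 04:00:00' after 'now'.
    RandomizedDelaySec jitter is ignored in this sketch."""
    candidate = now.replace(day=1, hour=4, minute=0, second=0, microsecond=0)
    if candidate <= now:
        # This month's trigger already passed; roll over to the 1st of next month
        year, month = (now.year + 1, 1) if now.month == 12 else (now.year, now.month + 1)
        candidate = candidate.replace(year=year, month=month)
    return candidate

print(next_monthly_trigger(datetime(2026, 5, 10, 18, 0)))  # 2026-06-01 04:00:00
```

`Persistent=true` additionally runs the missed trigger at boot if the machine was off at that time.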
@@ -12,10 +12,8 @@ whitelist:
- "{{ hostvars['fw']['network6']['console']['wg'] }}"
{% if node['name'] == 'auth' %}
expression:
-# budget local-first sql scrap rule
-- "evt.Meta.target_fqdn == '{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/data/migrations/'"
# immich thumbnail request 404 error false positive
-- "evt.Meta.target_fqdn == '{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'"
+- "evt.Meta.target_fqdn == '{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'"
-# opencloud chunk request false positive
+# nextcloud thumbnail/preview request error false positive
-- "evt.Meta.target_fqdn == '{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/js/chunks/'"
+- "evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status == '404' && evt.Meta.http_verb == 'GET' && evt.Meta.http_path startsWith '/index.php/core/preview?'"
{% endif %}
@@ -21,9 +21,9 @@ ProtectHome=tmpfs
InaccessiblePaths=/boot /root
{% if node['name'] == 'infra' %}
-BindReadOnlyPaths=%h/containers/postgresql/backups
+BindReadOnlyPaths={{ node['home_path'] }}/containers/postgresql/backups
{% elif node['name'] == 'app' %}
-BindReadOnlyPaths=%h/data
+BindReadOnlyPaths={{ node['home_path'] }}/data
{% endif %}
# In the root namespace, %u always resolves to 0
BindPaths=/etc/kopia
@@ -38,10 +38,10 @@ ExecStartPre=/usr/bin/kopia repository connect server \
{% if node['name'] == 'infra' %}
ExecStart=/usr/bin/kopia snapshot create \
-/home/infra/containers/postgresql/backups
+{{ node['home_path'] }}/containers/postgresql/backups
{% elif node['name'] == 'app' %}
ExecStart=/usr/bin/kopia snapshot create \
-/home/app/data
+{{ node['home_path'] }}/data
{% endif %}
@@ -14,22 +14,22 @@
## Context
- Maintaining multiple nodes requires a huge amount of resources, including hardware, electricity, and even administrative effort
- All units responsible for a single role should follow the Principle of Least Privilege (PoLP).
- All units should be built on standards so they stay interchangeable and avoid vendor lock-in.
## Consideration
### Hypervisor
- Proxmox Virtual Environment (PVE)
- Based on Debian.
- PVE uses the `qm` command, which is not a standard way to implement the virtual environment.
- VMware ESXi
- Based on UNIX, developed by VMware (licence is not free)
- Hyper-V
- Based on Microsoft Windows (licence is not free)
- Debian Stable
- Based on standard Linux (conservative)
- Standard virtualization technology 'Libvirt, QEMU, KVM'
### Container
@@ -37,7 +37,7 @@
- Docker
- Daemon is used to run containers
- Root authority required
- Socket and network problems are complex (Docker bridge)
- docker-compose is an orchestration tool
- Rootless Podman
- Daemonless design
@@ -58,7 +58,7 @@
## Decisions
- Use Libvirt/KVM/QEMU on pure Linux (Debian stable).
- Separate all services by VM, and podman rootless containers without K3S.
- An orchestration stack is not needed in a single-node system
- Services will be defined by Quadlet to integrate into systemd and to manage them declaratively
@@ -23,15 +23,15 @@
- OPNSense/pfSense
- vendor lock-in
- GUI environment (WebGUI) can contain vulnerabilities
- It is hard to manage configurations by IaC
- iptables
- Previous standard of Linux
- IPv4 and IPv6 configuration is separated (no inet)
- nftables
- New standard of Linux
- English-grammar-friendly syntax
- IPv4 and IPv6 configuration can be set on the same table (inet)
### Flat network structure
- LAN only
@@ -48,8 +48,8 @@
- VLAN 20: user (DHCP allocated devices)
- wg0: VPN connections
- Manage the rules based on roles fundamentally, and additionally based on IPs and ports when needed
- All L3 communication which needs to pass the gateway should be under the control of the firewall (fw)
- All nodes including the firewall use nftables (modern standard) to manage packets based on the zone concept
- IPv6 has a two-track strategy
- Client, server, and wg nodes have static ULA IPs, and use NAT66 for permanency
- User nodes have GUA SLAAC IPs from the ISP for compatibility
@@ -24,7 +24,7 @@
### Automate protocol
- JWK/JWT provisioner
- It is harder to manage pre-shared secret values than ACME (especially nsupdate)
- authorized_keys
- When the number of nodes increases, it is hard to manage authorized_keys.
- SSH ca.pub allows all certificates signed by the CA key, so it is not necessary to manage authorized_keys on each host.
@@ -39,19 +39,19 @@
## Decisions
- Operate private CA
- Root CA (stored on cold storage) - 10 years
- Intermediate CA (online server as Step-CA) - 5 years
- SSH CA - no period
- Manage certificates with two tracks
- ACME with nsupdate (using private DNS) for web services via Caddy - 90 days
- Manual issuing and managing of leaf certificates for infra services, for independence - 2.5 years
- The expiry dates of all manually issued leaf certificates are observed by x509-exporter on the infra VM
- Manage SSH certificates
- *-cert.pub for host (with -h option)
- *-cert.pub for client (without -h option)
## Consequences
- Private PKI is operated
- Private SSH CA is operated
- All external/internal communication is encrypted with TLS re-encryption. (E2EE)
@@ -12,9 +12,9 @@
## Context
- A private authoritative DNS is required to use the private reserved root domain (.internal)
- Split-horizon DNS needs a DNS resolver, because an authoritative DNS must not send queries to other DNS servers.
- Automatic issuing of certificates needs a private authoritative DNS which supports nsupdate (RFC 2136)
## Consideration
@@ -22,13 +22,13 @@
- AdGuard Home
- More powerful query routing than blocky
- Web UI dependency
- Extra functions which are not useful (DHCP, etc.)
- Unbound DNS
- Cache and forward-zone management is powerful
- More complex than blocky
- The cache function is not that needed in this environment
- The internal authoritative DNS only takes charge of internal communication
- All security functions are delegated to a public DNS like Cloudflare (DNSSEC, etc.)
## Decisions
@@ -29,8 +29,8 @@
- Regex-based log parsing is less structured than CrowdSec's parser/scenario model
- CrowdSec
- Community-based rules and scenarios (CAPI)
- Prevention based on local machines and parsers (LAPI)
- Bouncers can use nftables to prevent threats
- The parser can detect even L7 attacks under TLS
@@ -43,7 +43,7 @@
- Operate CrowdSec as an IPS
- CrowdSec uses two API servers: CAPI and LAPI.
- CAPI updates malicious IPs based on community decisions
- LAPI decides on malicious attacks based on logs from its parsers and scenarios (Suricata, caddy, etc.)
- When CAPI and LAPI decide to block an IP based on logs parsed by parsers and scenarios, the bouncer blocks the malicious accesses.
- CrowdSec registers the blacklist in nftables or iptables.
@@ -20,7 +20,7 @@
- HashiCorp Vault or Infisical
- Very powerful, but introduces significant compute/memory overhead.
- Creates a "Secret Zero" problem for a single-node homelab environment because of dependencies (DB, etc.).
- It is hard to operate hardware-separated key servers.
### Systemd-credential
@@ -37,10 +37,10 @@
## Decisions
- All secret data in yaml format is encrypted by sops with an age key in `secret.yaml`.
- The age key is encrypted by gpg and ansible-vault with a master key (including upper case, lower case, numbers, and special letters) longer than 40 characters.
- All secret data is always decrypted by the `edit_secret.sh` script or ansible tasks from secrets.yaml using the age key encrypted by ansible-vault.
- Decrypted secret data is always processed on ramfs; it is never saved on disk.
- The master key is never saved on disk, only on cold storage (USB, M-DISC, operators' memory)
- The secret data will be saved in each server's specific directory or a podman secret.
- OS:
- path: /etc/secrets
@@ -6,7 +6,10 @@
- First documentation
- Feb/27/2026
- Status changed from Deferred to Accepted
- May/06/2026
- Add backup checking rules
## Status
@@ -14,7 +17,7 @@
## Context
- All configuration files are managed by git (IaC)
- All data files should be backed up by kopia
- All backups should follow the 3-2-1 backup cycle
@@ -30,20 +33,26 @@
- Backing up the `/var/lib/postgresql` directory directly while the DB is running can lead to severe data corruption and inconsistency.
- Logical dumps (`pg_dump`) are much safer, database-agnostic, and easier to restore in a homelab environment.
### Silent failure problem
- On May/06/2026, it was discovered that backups hadn't run since commit '9f236b6fa5' because of '%h' in the system service unit.
- The operator couldn't notice that backups weren't running, because the system service failed silently.
- Therefore, a checking rule is set.
## Decisions
- All configuration files are managed by Git
- Configuration files are text-based
- Versioning and history management are necessary.
- Local git -> private Gitea -> github private project (mirrored)
- This fulfills the 3-2-1 backup rules
- Data files are managed by Kopia and DSM
- Local storage - kopia -> DSM's Kopia repository server - CloudSync -> cloud server such as OneDrive or Google Drive
- This fulfills the 3-2-1 backup rules
- Data files which need backup
- DB data files: dump
- DB data files are located on infra:/home/infra/containers/postgresql/backups/{cluster,$service}/
- App data files: photos, media, etc.
- App data files are located on app:/home/app/data/
- Backed-up files: kopia
@@ -51,11 +60,18 @@
- Kopia-over-DSM configuration is managed by runbook with equivalent CLI commands due to vendor limitations
- Restore will be processed manually
- DB data files
- From the kopia server to console:$HOMELAB_PATH/data/volume/infra/postgresql/{cluster,data}
- App data files
- From the kopia server to the APP VM after initiating, before deploying services
- Automatic backup does not guarantee the integrity of the data system, so before resetting the system, conduct a manual backup after making sure all services are shut down.
- Check the repository once a week (every Monday)
- Check the snapshots in the repository with `kopia snapshot list --all`
- Mount a snapshot with `kopia mount $SNAPSHOT_ID $DESTINATION`
- Copy a random file from the snapshot and check its values.
- If there is a failure, check the backup service and conduct a backup immediately.
- Repeat the check flow.
- When everything is done, unmount the kopia mount with `ctrl+c`
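The weekly check flow above can be sketched as a dry run (Python used only for illustration; the snapshot ID, mount point, and file paths are hypothetical, and the commands are printed rather than executed):

```python
# Dry-run sketch of the weekly backup verification flow.
# Swap echo_cmd for subprocess.run(args, check=True) to actually execute.
SNAPSHOT_ID = "k1234567890abcdef"  # hypothetical, read from 'kopia snapshot list --all'
MOUNT_POINT = "/mnt/kopia-check"   # hypothetical mount destination

def echo_cmd(args):
    """Print the command instead of running it, and return the printed line."""
    line = " ".join(args)
    print("+", line)
    return line

# 1. Enumerate all snapshots in the repository
echo_cmd(["kopia", "snapshot", "list", "--all"])
# 2. Mount one snapshot for inspection (kopia mount blocks; ctrl+c unmounts)
echo_cmd(["kopia", "mount", SNAPSHOT_ID, MOUNT_POINT])
# 3. Spot-check a random file against the live copy
echo_cmd(["cmp", f"{MOUNT_POINT}/some/file", "/home/app/data/some/file"])
```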
## Consequences
- All files, including configuration and data backups, will fulfill the 3-2-1 (3 copies, 2 different media, 1 offsite) backup rules
@@ -11,7 +11,7 @@
## Context
- The app VM needs a GPU for heavy workloads like Immich (hardware transcoding and machine learning)
- The app VM needs huge data storage for its own services
## Considerations
@@ -18,7 +18,7 @@
### Hypervisor
- As a pure hypervisor, it should only operate virtualization for VMs.
- The hypervisor just provides resources and a dummy hub (br)
### VM
@@ -30,7 +30,7 @@
### Services
- Services should be distinguished based on their needs (privilege)
- The network stack and backup stack need special privileges for low-level ACLs or networking.
- The application stack usually doesn't need low-level privileges
@@ -27,7 +27,7 @@
- Removing
- Formatting
- Destroying
- Certificates and CA ([ADR-003](./003-pki.md))
- Etc.: whatever the operator decides is sensitive
## Consequences
@@ -19,7 +19,7 @@
### Apply mTLS
- Implementing mTLS needs both a client certificate and a server certificate
- Managing a number of certificates creates a huge operational burden (expiry dates, revocation, etc.)
## Decisions
@@ -30,4 +30,4 @@
- The policy is kept simple
- The overhead increases only a little
- Exclude the exceptions in operation (for the administrator)
@@ -13,7 +13,7 @@ PublishPort={{ services['actualbudget']['ports']['http'] }}:5006
Volume=%h/data/containers/actual-budget:/data:rw
-Environment="TZ=Asia/Seoul"
+Environment="TZ={{ timezone }}"
Environment="ACTUAL_OPENID_DISCOVERY_URL=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}/.well-known/openid-configuration"
Environment="ACTUAL_OPENID_CLIENT_ID=actual-budget"
Environment="ACTUAL_OPENID_SERVER_HOSTNAME=https://{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}"
@@ -0,0 +1,26 @@
---
identity_providers:
oidc:
clients:
# https://www.authelia.com/integration/openid-connect/clients/actual-budget/
- client_id: 'actual-budget'
client_name: 'Actual Budget'
client_secret: 'secret'
public: false
authorization_policy: 'one_factor'
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://actualbudget.example.com/openid/callback'
scopes:
- 'openid'
- 'profile'
- 'groups'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
@@ -0,0 +1,6 @@
name: crowdsecurity/whitelists
description: "Local whitelist policy"
whitelist:
expression:
# budget local-first sql scrap rule
- "evt.Meta.target_fqdn == '{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_status in ['200', '304'] && evt.Meta.http_verb == 'GET' && evt.Meta.http_path contains '/data/migrations/'"
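The matching semantics of the whitelist expression above can be illustrated with a rough Python analogy (CrowdSec actually evaluates these as expr-lang expressions in Go; the event fields and domain values here are made up for the example):

```python
# Rough Python analogy of the CrowdSec whitelist expression above.
# CrowdSec evaluates expr-lang expressions in Go; this is only an illustration.
def whitelisted(evt_meta: dict, budget_fqdn: str) -> bool:
    return (
        evt_meta.get("target_fqdn") == budget_fqdn
        and evt_meta.get("http_status") in ("200", "304")
        and evt_meta.get("http_verb") == "GET"
        and "/data/migrations/" in evt_meta.get("http_path", "")
    )

# Hypothetical event that the rule is meant to whitelist
event = {
    "target_fqdn": "budget.example.com",
    "http_status": "200",
    "http_verb": "GET",
    "http_path": "/data/migrations/1632571489_prefs.sql",
}
print(whitelisted(event, "budget.example.com"))  # True for this sample event
```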
@@ -0,0 +1,13 @@
---
services:
actualbudget:
domain:
public: ""
internal: ""
ports:
http: ""
subuid: "101000"
version:
containers:
actualbudget: "26.3.0"
@@ -0,0 +1,5 @@
---
actualbudget:
oidc:
secret: ""
hash: ""
@@ -0,0 +1,25 @@
---
identity_providers:
oidc:
clients:
# https://www.authelia.com/integration/openid-connect/clients/ezbookkeeping/
- client_id: 'ezbookkeeping'
client_name: 'ezBookkeeping'
client_secret: 'hash'
public: false
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://ezbookkeeping.example.com/oauth2/callback'
scopes:
- 'openid'
- 'profile'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
@@ -0,0 +1,61 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=ezBookkeeping
After=network-online.target
Wants=network-online.target
[Container]
Image=docker.io/mayswind/ezbookkeeping:{{ version['containers']['ezbookkeeping'] }}
ContainerName=ezbookkeeping
HostName=ezbookkeeping
PublishPort={{ services['ezbookkeeping']['ports']['http'] }}:8080/tcp
Volume=%h/data/containers/ezbookkeeping/data:/data:rw
Volume=%h/containers/ezbookkeeping/ssl:/etc/ssl/ezbookkeeping:ro
# General
Environment="TZ={{ timezone }}"
Environment="EBK_SERVER_DOMAIN={{ services['ezbookkeeping']['domain']['public'] }}.{{ domain['public'] }}"
Environment="EBK_SERVER_ROOT_URL=https://{{ services['ezbookkeeping']['domain']['public'] }}.{{ domain['public'] }}/"
Environment="EBK_LOG_MODE=console"
# Database
Environment="EBK_DATABASE_TYPE=postgres"
Environment="EBK_DATABASE_HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}"
Environment="EBK_DATABASE_NAME=ezbookkeeping_db"
Environment="EBK_DATABASE_USER=ezbookkeeping"
Secret=EBK_DATABASE_PASSWD,type=env
Environment="EBK_DATABASE_SSL_MODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/ezbookkeeping/{{ root_cert_filename }}"
# OIDC
Environment="EBK_AUTH_ENABLE_OAUTH2_AUTH=true"
Environment="EBK_AUTH_OAUTH2_PROVIDER=oidc"
Environment="EBK_AUTH_OAUTH2_CLIENT_ID=ezbookkeeping"
Secret=EBK_AUTH_OAUTH2_CLIENT_SECRET,type=env
Environment="EBK_AUTH_OAUTH2_USE_PKCE=true"
Environment="EBK_AUTH_OIDC_PROVIDER_BASE_URL=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="EBK_AUTH_ENABLE_OIDC_DISPLAY_NAME=true"
Environment="EBK_AUTH_OIDC_CUSTOM_DISPLAY_NAME=Authelia"
# Registration / auth policy
Environment="EBK_AUTH_ENABLE_INTERNAL_AUTH=false"
Environment="EBK_USER_ENABLE_REGISTER=true"
Environment="EBK_AUTH_OAUTH2_AUTO_REGISTER=true"
# AI / MCP disabled by default
Environment="EBK_MCP_ENABLE_MCP=false"
Environment="EBK_LLM_TRANSACTION_FROM_AI_IMAGE_RECOGNITION=false"
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,35 @@
# ezBookkeeping
## Prerequisite
### Create database
- Create the password with `openssl rand -base64 32`
- Save the value in `secrets.yaml` under `postgresql.password.ezbookkeeping`
- Access the infra server and create `ezbookkeeping_db` with `podman exec -it postgresql psql -U postgres`
```SQL
CREATE USER ezbookkeeping WITH PASSWORD 'postgresql.password.ezbookkeeping';
CREATE DATABASE ezbookkeeping_db;
ALTER DATABASE ezbookkeeping_db OWNER TO ezbookkeeping;
```
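The password step above can be sketched end to end. The `EBK_DB_PASSWORD` variable name is illustrative, and the non-interactive psql invocation is an assumption about the setup, not taken from the role:

```shell
# Generate a 32-byte random password (44 base64 characters);
# store the result under postgresql.password.ezbookkeeping in secrets.yaml.
EBK_DB_PASSWORD="$(openssl rand -base64 32)"
printf '%s\n' "$EBK_DB_PASSWORD"

# On the infra server the same SQL could then be fed non-interactively
# (sketch; commented out because it needs the running postgresql container):
# podman exec -i postgresql psql -U postgres <<SQL
# CREATE USER ezbookkeeping WITH PASSWORD '${EBK_DB_PASSWORD}';
# CREATE DATABASE ezbookkeeping_db;
# ALTER DATABASE ezbookkeeping_db OWNER TO ezbookkeeping;
# SQL
```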
### Create OIDC secret and hash
- Create the secret with `openssl rand -base64 32`
- Access the auth VM
- `podman exec -it authelia sh`
- `authelia crypto hash generate pbkdf2 --password 'ezbookkeeping.oidc.secret'`
- Save the plain secret and the generated hash in `secrets.yaml` under `ezbookkeeping.oidc.secret` and `ezbookkeeping.oidc.hash`
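For reference, the Authelia client entry that consumes this hash would look roughly like a confidential-client variant of the OpenCloud entries shown later in this diff. Everything here is a hedged sketch, not the actual `authelia.yaml.j2` contents: the hostvars lookup path mirrors the task file, and the redirect URI path is a placeholder:

```yaml
clients:
  - client_id: 'ezbookkeeping'
    client_name: 'ezBookkeeping'
    # pbkdf2 digest generated above; lookup path is assumed from the task file
    client_secret: "{{ hostvars['console']['ezbookkeeping']['oidc']['hash'] }}"
    authorization_policy: 'one_factor'
    # matches EBK_AUTH_OAUTH2_USE_PKCE=true in the container file
    require_pkce: true
    pkce_challenge_method: 'S256'
    redirect_uris:
      # callback path is a placeholder; check the app's actual OAuth2 route
      - "https://{{ services['ezbookkeeping']['domain']['public'] }}.{{ domain['public'] }}/oauth2/callback"
    scopes:
      - 'openid'
      - 'profile'
      - 'email'
    grant_types:
      - 'authorization_code'
```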
### Add ezbookkeeping to the postgresql dump backup list
- [set_postgresql.yaml](../../../ansible/roles/infra/tasks/services/set_postgresql.yaml)
```yaml
- name: Set connected services list
ansible.builtin.set_fact:
connected_services:
- ...
- "ezbookkeeping"
```
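A rough sketch of how such a list is typically consumed by a dump task; the module, command, and paths here are assumptions about the role, not its actual contents:

```yaml
- name: Dump databases of connected services
  ansible.builtin.command: >-
    podman exec postgresql
    pg_dump -U postgres -f /tmp/{{ item }}_db.sql {{ item }}_db
  loop: "{{ connected_services }}"
  changed_when: true
```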
@@ -0,0 +1,13 @@
---
services:
ezbookkeeping:
domain:
public: ""
internal: ""
ports:
http: ""
subuid: "100999"
version:
containers:
ezbookkeeping: "1.4.0"
@@ -0,0 +1,8 @@
---
postgresql:
password:
ezbookkeeping: ""
ezbookkeeping:
oidc:
secret: ""
hash: ""
@@ -0,0 +1,58 @@
---
- name: Create ezbookkeeping directories
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['ezbookkeeping']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/ezbookkeeping"
- "data/containers/ezbookkeeping/data"
- "containers/ezbookkeeping"
- "containers/ezbookkeeping/ssl"
become: true
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/ezbookkeeping/ssl/{{ root_cert_filename }}"
owner: "{{ services['ezbookkeeping']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_ezbookkeeping"
no_log: true
- name: Register secret value to podman secret
containers.podman.podman_secret:
name: "{{ item.name }}"
data: "{{ item.value }}"
state: "present"
force: true
loop:
- name: "EBK_AUTH_OAUTH2_CLIENT_SECRET"
value: "{{ hostvars['console']['ezbookkeeping']['oidc']['secret'] }}"
- name: "EBK_DATABASE_PASSWD"
value: "{{ hostvars['console']['postgresql']['password']['ezbookkeeping'] }}"
notify: "notification_restart_ezbookkeeping"
no_log: true
- name: Deploy ezbookkeeping.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/ezbookkeeping/ezbookkeeping.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/ezbookkeeping.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_ezbookkeeping"
- name: Enable ezbookkeeping.service
ansible.builtin.systemd:
name: "ezbookkeeping.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -0,0 +1,110 @@
---
identity_providers:
oidc:
# For the app which doesn't use secret.
cors:
endpoints:
- 'authorization'
- 'token'
- 'revocation'
- 'introspection'
- 'userinfo'
allowed_origins:
- 'https://opencloud.example.com'
allowed_origins_from_client_redirect_uris: true
clients:
# OpenCloud configuration
## https://docs.opencloud.eu/docs/admin/configuration/authentication-and-user-management/external-idp/
## Web
- client_id: 'opencloud'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://opencloud.example.com/'
- 'https://opencloud.example.com/oidc-callback.html'
- 'https://opencloud.example.com/oidc-silent-redirect.html'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## Desktop
- client_id: 'OpenCloudDesktop'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'http://localhost'
- 'http://127.0.0.1'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## Android
- client_id: 'OpenCloudAndroid'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'oc://android.opencloud.eu'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## iOS
- client_id: 'OpenCloudIOS'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'oc://ios.opencloud.eu'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
