Compare commits

..

48 Commits

Author SHA1 Message Date
il 5dd38b7e49 fix(crowdsec): update whitelist.yaml to prevent false positive
false positive:
- chunk problems (crowdsecurity/http-crawl-non_statics)
- directory upload 404 problem (crowdsecurity/http-probing)
2026-05-02 20:38:48 +09:00
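The whitelist change above can be pictured with a short CrowdSec whitelist sketch; the scenario names come from the commit message, but the file name and matching expressions below are assumptions, not the repo's actual rules:

```yaml
# hypothetical whitelist.yaml fragment; path expressions are illustrative only
name: ilnmors/false-positive-whitelist
description: "Drop chunk-upload and upload-404 false positives"
whitelist:
  reason: "chunked uploads look like crawling; upload probes return 404"
  expression:
    # chunk uploads flagged as crowdsecurity/http-crawl-non_statics
    - evt.Meta.http_path startsWith "/remote.php/dav/uploads/"
```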
il 33d94211d1 docs(issues): fix crowdsec command 'cscli decision list' to 'cscli decision delete' 2026-05-02 19:46:51 +09:00
il 278dd3cebe feat(nextcloud): release nextcloud
deployment note:
- use nextcloud for groupware
- consider replacing vikunja and opencloud
2026-05-02 19:22:05 +09:00
il d1dcb1984a feat(vaultwarden): update vaultwarden version from 1.35.4 to 1.35.8 2026-04-30 10:03:33 +09:00
il 37c986177b feat(blocky): update blocky version from 0.28.2 to 0.29.0 2026-04-30 10:01:18 +09:00
il 17326b1b15 feat(step-ca): update step-ca version from 0.29.0 to 0.30.2
update note:
- step-ca container doesn't support $PWDPATH anymore
- add --password-file argument to exec
2026-04-30 09:56:22 +09:00
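The step-ca note translates to a one-line change in the container unit's exec; the unit shape and paths below are assumptions, not the repo's actual layout:

```ini
# hypothetical step-ca.container fragment
# the 0.29.x image read the password via $PWDPATH;
# 0.30.x wants the flag spelled out:
Exec=step-ca /home/step/config/ca.json --password-file /home/step/secrets/password
```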
il 88e1383202 feat(x509-exporter): update x509-exporter version from 3.19.1 to 3.21.0 2026-04-30 09:19:42 +09:00
il c9b4707cb2 refactor(x509-exporter): change handler from enable to restart 2026-04-30 09:18:44 +09:00
il da9c610426 feat(caddy): update caddy version from 2.10.2 to 2.11.2
update note:
- https upstream Host rewrite is automated
- Caddyfile already defines Host rewrite explicitly
2026-04-30 09:09:40 +09:00
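The Caddy note means the explicit rewrite already in the Caddyfile now duplicates what 2.11 does automatically for `https://` upstreams; a hedged sketch of such a block (hostname and port assumed):

```
# hypothetical Caddyfile fragment
reverse_proxy https://nas.ilnmors.internal:5001 {
    # Caddy 2.11 sets this automatically for https upstreams,
    # so keeping it explicit is redundant but harmless
    header_up Host {upstream_hostport}
}
```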
il c1a6da2aa8 feat(authelia): update authelia version from 4.39.15 to 4.39.19 2026-04-30 09:07:16 +09:00
il f1cd8c9a60 feat(gitea): update gitea version from 1.25.5 to 1.26.1
deployment note:
- stop gitea container
- create manual database backup
- update gitea
2026-04-30 08:28:51 +09:00
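The deployment note sketches out roughly as the commands below; the service, database, and host names are assumptions, not the repo's exact values:

```sh
# hypothetical runbook for the gitea 1.25.5 -> 1.26.1 bump
systemctl --user stop gitea.service
pg_dump -h postgresql.ilnmors.internal -U gitea gitea_db > gitea_db.pre-1.26.1.sql
# then bump version['containers']['gitea'] and re-run the gitea tasks
```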
il 6010230a14 feat(paperless): update paperless version from 2.20.13 to 2.20.15 2026-04-30 08:10:50 +09:00
il c3d8b62504 feat(opencloud): update opencloud version from 4.0.4 to 4.0.6 2026-04-30 08:03:33 +09:00
il 4a409e37e9 docs(issues): fix service name in timeline 2026-04-28 11:19:50 +09:00
il cb4d17f99e docs(issues): add the past issues which existed before tracking issues
add crowdsec false positive issues

fix the file name of affine android oidc issues
2026-04-27 19:50:04 +09:00
il 9569492e42 docs(issues): add affine android OIDC sign-up failure issue
start tracking service issues on the docs/issues directory
2026-04-20 17:55:26 +09:00
il 2a7b234f4e docs(affine): update flags on affine doc to check blocking guest user 2026-04-20 15:53:27 +09:00
il 621d5310a3 feat(immich): update immich version from 2.7.4 to 2.7.5 2026-04-17 14:16:44 +09:00
il 6377a56d95 refactor(ldap): Add annotation in ldap roles file
the reason why task doesn't use init logic which uses .init file
2026-04-17 14:10:36 +09:00
il dbd72f43a4 refactor(postgresql): update postgresql roles and handler to optimize init check logic 2026-04-17 13:58:22 +09:00
il 9f236b6fa5 refactor(kopia): fix the homepath from hardcoded path to %h the systemd specifier 2026-04-14 07:44:39 +09:00
il b4a0874deb refactor(authelia): fix publish port from hardcoded number to variable 2026-04-14 07:43:12 +09:00
il c51216ff9b refactor(gitea): fix publish port from hardcoded number to variable 2026-04-14 07:42:32 +09:00
il 7debdfcb93 fix(alloy): fix log level parser
- remove parser for JSON and logfmt, and add regex expression to extract the level of log
2026-04-13 10:42:10 +09:00
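The parser change can be sketched as a Grafana Alloy `loki.process` block; the regex, component labels, and forward target below are assumptions, not the repo's actual config:

```
// hypothetical Alloy fragment extracting a level label via regex
loki.process "system" {
  stage.regex {
    expression = "(?i)\\b(?P<level>trace|debug|info|warn(ing)?|error|fatal)\\b"
  }
  stage.labels {
    values = { level = "" }
  }
  forward_to = [loki.write.default.receiver]
}
```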
il da016343c0 feat(alloy): add json parser to categorize log level 2026-04-12 14:09:44 +09:00
il bf749ebbde chore(chromium): delete the roles from the console playbook 2026-04-12 10:58:07 +09:00
il 41d509a49d feat(immich): update immich version from 2.6.3 to 2.7.4
- IMMICH_HELMET_FILE environment can set CSP from v2.7.0
2026-04-12 10:45:59 +09:00
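The CSP note suggests mounting a helmet config file and pointing the new variable at it; the mount path and file name below are assumptions:

```ini
# hypothetical immich.container fragment (IMMICH_HELMET_FILE exists from v2.7.0)
Environment=IMMICH_HELMET_FILE=/config/helmet.json
Volume=%h/containers/immich/config/helmet.json:/config/helmet.json:ro
```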
il f062f6862f docs(git): define git convention 2026-04-12 10:31:13 +09:00
il 2dfc0f734e roles, docs: update set_podman.yaml and environments.md to fix typo 2026-04-08 15:21:22 +09:00
il f9211dfa24 inventory: update host_vars/console.yaml to add the hostname of console in local_san to fix sudo speed problem 2026-04-08 14:34:05 +09:00
il 8713631e0b docs: update affine.md to clarify limitation of affine's community edition 2026-04-07 11:34:08 +09:00
il 01ad4350b0 docs: update environments.md to reflect current server status 2026-04-07 00:04:51 +09:00
il 8a4ce488f1 roles: update handlers/main.yaml and set_opencloud.yaml to optimize init check logic in roles 2026-04-06 23:40:06 +09:00
il 664cf2956d 1.9.0 Release affine 2026-04-06 23:33:44 +09:00
il 8c3fe409ae 1.8.2 Update manticore 2026-04-06 20:35:34 +09:00
il 075b796608 config, docs: update whitelists.yaml.j2 and crowdsec.md to add whitelist expression to fix false positive of opencloud chunk problem 2026-04-04 09:59:58 +09:00
il 0b7d1c4d78 1.8.0 Release opencloud 2026-04-04 09:45:48 +09:00
il 017de863d9 inventory, roles: update group_vars/all.yaml and set service files to centralize subuid for containers 2026-04-01 22:22:40 +09:00
il b52a6f6f0d config: update postgresql.conf.j2 to fix port from hardcoded number to ansible variable 2026-04-01 21:54:55 +09:00
il 84d961c7e3 inventory, roles, config, docs: update all files to refactor the ansible variables structure 2026-04-01 21:30:56 +09:00
il d1e0eb30c0 config, docs: update secrets.yaml, vikunja.container.j2, vikunja.md to remove oidc fallback options and local-oidc dual login configuration 2026-03-31 20:31:51 +09:00
il 0f38df0100 config: update fw/nftables.conf.j2 to add the rule that allow connection from console to printer 2026-03-29 13:01:59 +09:00
il 7911657c8c version: update group_vars/all.yaml to update immich from v2.6.2 to v2.6.3 2026-03-28 15:53:03 +09:00
il fd5d0ce4f8 roles: update set_vikunja.yaml to fix the task name error 2026-03-28 13:23:01 +09:00
il 98bc863d08 docs: update environment.md to reflect newest changes on environment 2026-03-28 10:49:52 +09:00
il 9137791aac 1.7.0: Release vikunja 2026-03-28 10:44:18 +09:00
il f9179282b8 roles: update set_immich.yaml to add redis_immich container restart logic 2026-03-28 10:19:41 +09:00
il 25e33caec9 roles, config, docs: update set_paperless.yaml, paperless.container.j2, paperless-ngx.md to add redis_paperless container restart logic and to optimize paperless-ngx configuration 2026-03-25 23:47:50 +09:00
103 changed files with 2162 additions and 391 deletions
+138 -24
@@ -2,66 +2,175 @@
# Global vars
ansible_ssh_private_key_file: "/etc/secrets/{{ hostvars['console']['node']['uid'] }}/id_console"
# URL information, you can use {{ infra_uri['services'] | split(':') | first|last }} to separate domain and ports
infra_uri:
# CA
root_cert_filename: "ilnmors_root_ca.crt"
intermediate_cert_filename: "ilnmors_intermediate_ca.crt"
intermediate_key_filename: "ilnmors_intermediate_ca.key"
# local SAN and SSH SAN should be updated manually on host_vars
domain:
public: "ilnmors.com"
internal: "ilnmors.internal"
dc: "dc=ilnmors,dc=internal"
org: "ilnmors"
# DNS configuration including bind and blocky should be set manually.
# named.conf.j2 is also set manually.
# Check the hosts.j2 when cname records are fixed
services:
crowdsec:
-domain: "crowdsec.ilnmors.internal"
+domain: "crowdsec"
ports:
https: "8080"
bind:
-domain: "bind.ilnmors.internal"
+domain: "bind"
ports:
dns: "53"
blocky:
-domain: "blocky.ilnmors.internal"
+domain: "blocky"
ports:
https: "443"
dns: "53"
postgresql:
-domain: "postgresql.ilnmors.internal"
+domain: "postgresql"
ports:
tcp: "5432" # postgresql db connection port
subuid: "100998"
ldap:
-domain: "ldap.ilnmors.internal"
+domain: "ldap"
ports:
http: "17170"
-ldaps: "636"
+ldaps: "6360"
subuid: "100999"
ca:
-domain: "ca.ilnmors.internal"
+domain: "ca"
ports:
https: "9000"
subuid: "100999"
x509-exporter:
ports:
http: "9793"
subuid: "165533"
prometheus:
-domain: "prometheus.ilnmors.internal"
+domain: "prometheus"
ports:
https: "9090"
subuid: "165533"
loki:
-domain: "loki.ilnmors.internal"
+domain: "loki"
ports:
https: "3100"
subuid: "110000"
grafana:
domain: "grafana"
ports:
http: "3000"
subuid: "100471"
caddy:
ports:
http: "2080"
https: "2443"
nas:
-domain: "nas.ilnmors.internal"
+domain: "nas"
ports:
https: "5001"
kopia:
-domain: "nas.ilnmors.internal"
+domain: "nas"
ports:
https: "51515"
authelia:
domain: "authelia"
ports:
http: "9091"
redis:
subuid: "100998"
vaultwarden:
domain:
public: "vault"
internal: "vault.app"
ports:
http: "8000"
gitea:
domain:
public: "gitea"
internal: "gitea.app"
ports:
http: "3000"
subuid: "100999"
immich:
domain:
public: "immich"
internal: "immich.app"
ports:
http: "2283"
redis: "6379"
immich-ml:
ports:
http: "3003"
actualbudget:
domain:
public: "budget"
internal: "budget.app"
ports:
http: "5006"
subuid: "101000"
paperless:
domain:
public: "paperless"
internal: "paperless.app"
ports:
http: "8001"
redis: "6380"
subuid: "100999"
vikunja:
domain:
public: "vikunja"
internal: "vikunja.app"
ports:
http: "3456"
subuid: "100999"
opencloud:
domain:
public: "opencloud"
internal: "opencloud.app"
ports:
http: "9200"
subuid: "100999"
manticore:
subuid: "100998"
affine:
domain:
public: "affine"
internal: "affine.app"
ports:
http: "3010"
redis: "6381"
manticore: "9308"
nextcloud:
domain:
public: "nextcloud"
internal: "nextcloud.app"
ports:
http: "8002"
redis: "6382"
subuid: "100032"
version:
packages:
sops: "3.12.1"
-step: "0.29.0"
+step: "0.30.2"
kopia: "0.22.3"
-blocky: "0.28.2"
+blocky: "0.29.0"
alloy: "1.13.0"
# telegraf: "1.37.1"
containers:
# common
-caddy: "2.10.2"
+caddy: "2.11.2"
# infra
-step: "0.29.0"
+step: "0.30.2"
ldap: "v0.6.2"
-x509-exporter: "3.19.1"
+x509-exporter: "3.21.0"
prometheus: "v3.9.1"
loki: "3.6.5"
grafana: "12.3.3"
@@ -71,11 +180,16 @@ version:
# pgvector: "v0.8.1"
vectorchord: "0.5.3"
# Auth
-authelia: "4.39.15"
+authelia: "4.39.19"
# App
-vaultwarden: "1.35.4"
-gitea: "1.25.5"
+vaultwarden: "1.35.8"
+gitea: "1.26.1"
redis: "8.6.1"
-immich: "v2.6.2"
+immich: "v2.7.5"
actualbudget: "26.3.0"
-paperless: "2.20.13"
+paperless: "2.20.15"
vikunja: "2.2.2"
opencloud: "4.0.6"
manticore: "25.0.0"
affine: "0.26.3"
nextcloud: "33.0.3"
-4
@@ -39,7 +39,3 @@ storage:
label: "APP_DATA"
level: "raid10"
mount_point: "/home/app/data"
-redis:
-immich: "6379"
-paperless: "6380"
+2 -1
@@ -21,5 +21,6 @@ node:
config_path: "{{ node.homelab_path }}/config"
ssh_san: "console,console.ilnmors.internal"
ssh_users: "vmm,fw,infra,auth,app"
-local_san: "localhost console.ilnmors.internal"
+# add the hostname of wsl, it is needed to improve the sudo problem
+local_san: "localhost console.ilnmors.internal surface"
# ansible_python_interpreter: "{{ ansible_playbook_python }}"
+33
@@ -201,6 +201,39 @@
tags: ["site", "paperless"]
tags: ["site", "paperless"]
- name: Set vikunja
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_vikunja"
apply:
tags: ["site", "vikunja"]
tags: ["site", "vikunja"]
- name: Set opencloud
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_opencloud"
apply:
tags: ["site", "opencloud"]
tags: ["site", "opencloud"]
- name: Set affine
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_affine"
apply:
tags: ["site", "affine"]
tags: ["site", "affine"]
- name: Set nextcloud
ansible.builtin.include_role:
name: "app"
tasks_from: "services/set_nextcloud"
apply:
tags: ["site", "nextcloud"]
tags: ["site", "nextcloud"]
- name: Flush handlers right now
ansible.builtin.meta: "flush_handlers"
+1 -9
@@ -115,18 +115,10 @@
become: true
tags: ["init", "site", "install-packages"]
-- name: Install CLI tools
+- name: Set CLI tools
ansible.builtin.include_role:
name: "console"
tasks_from: "services/set_cli_tools"
apply:
tags: ["init", "site", "tools"]
tags: ["init", "site", "tools"]
-- name: Install chromium with font
-ansible.builtin.include_role:
-name: "console"
-tasks_from: "services/set_chromium"
-apply:
-tags: ["init", "site", "chromium"]
-tags: ["init", "site", "chromium"]
+47
@@ -64,3 +64,50 @@
changed_when: false
listen: "notification_restart_paperless"
ignore_errors: true # noqa: ignore-errors
- name: Restart vikunja
ansible.builtin.systemd:
name: "vikunja.service"
state: "restarted"
enabled: true
scope: "user"
daemon_reload: true
changed_when: false
listen: "notification_restart_vikunja"
ignore_errors: true # noqa: ignore-errors
- name: Restart opencloud
ansible.builtin.systemd:
name: "opencloud.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_opencloud_init.stat.exists
changed_when: false
listen: "notification_restart_opencloud"
ignore_errors: true # noqa: ignore-errors
- name: Restart affine
ansible.builtin.systemd:
name: "affine.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_affine_init.stat.exists
changed_when: false
listen: "notification_restart_affine"
ignore_errors: true # noqa: ignore-errors
- name: Restart nextcloud
ansible.builtin.systemd:
name: "nextcloud.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_nextcloud_init.stat.exists
changed_when: false
listen: "notification_restart_nextcloud"
ignore_errors: true # noqa: ignore-errors
@@ -1,13 +1,9 @@
---
-- name: Set actual budget container subuid
-ansible.builtin.set_fact:
-actualbudget_subuid: "101000"
- name: Create actual budget directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/data/containers/actual-budget"
state: "directory"
-owner: "{{ actualbudget_subuid }}"
+owner: "{{ services['actualbudget']['subuid'] }}"
group: "svadmins"
mode: "0770"
become: true
@@ -0,0 +1,163 @@
---
- name: Set manticore service name
ansible.builtin.set_fact:
manticore_service: "affine"
- name: Create manticore directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['manticore']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/manticore"
- "data/containers/manticore/{{ manticore_service }}"
become: true
- name: Deploy manticore.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/manticore/manticore.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/manticore_{{ manticore_service }}.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_manticore_containerfile"
- name: Enable (Restart) manticore.service
ansible.builtin.systemd:
name: "manticore_{{ manticore_service }}.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_manticore_containerfile.changed # noqa: no-handler
- name: Set redis service name
ansible.builtin.set_fact:
redis_service: "affine"
- name: Create redis_affine directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['redis']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "containers/redis"
- "containers/redis/{{ redis_service }}"
- "containers/redis/{{ redis_service }}/data"
become: true
- name: Deploy redis config file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.conf.j2"
dest: "{{ node['home_path'] }}/containers/redis/{{ redis_service }}/redis.conf"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_conf"
- name: Deploy redis container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/redis_{{ redis_service }}.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_containerfile"
- name: Enable (Restart) redis service
ansible.builtin.systemd:
name: "redis_{{ redis_service }}.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_redis_conf.changed or is_redis_containerfile.changed # noqa: no-handler
- name: Create affine directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/affine"
- "containers/affine"
- "containers/affine/ssl"
- "containers/affine/config"
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/affine/ssl/{{ root_cert_filename }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0440"
notify: "notification_restart_affine"
no_log: true
- name: Register secret value to podman secret
containers.podman.podman_secret:
name: "{{ item.name }}"
data: "{{ item.value }}"
state: "present"
force: true
loop:
- name: "AFFINE_PRIVATE_KEY"
value: "{{ hostvars['console']['affine']['secret_key'] }}"
- name: "AFFINE_DATABASE_URL"
value: "postgresql://affine:{{ hostvars['console']['postgresql']['password']['affine'] | urlencode | replace('/', '%2F') }}\
@{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}/affine_db?sslmode=verify-full&\
sslrootcert=/etc/ssl/affine/{{ root_cert_filename }}"
notify: "notification_restart_affine"
no_log: true
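The AFFINE_DATABASE_URL template escapes the password with `urlencode` and then replaces `/` by hand, because the filter leaves slashes alone and a raw `/` would terminate the password component of a `postgresql://` URL. Python's `quote` (used here as a stand-in; Jinja's `urlencode` may differ in other details) shows the same behavior:

```python
from urllib.parse import quote

# quote() keeps '/' unescaped by default (safe="/"), mirroring why the
# template needs the extra replace('/', '%2F') step.
password = "p@ss/word:1"  # hypothetical password, not a real secret
encoded = quote(password).replace("/", "%2F")
print(encoded)  # p%40ss%2Fword%3A1
```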
- name: Check data directory empty
ansible.builtin.stat:
path: "{{ node['home_path'] }}/data/containers/affine/.init"
register: "is_affine_init"
- name: Initialize affine
when: not is_affine_init.stat.exists
block:
- name: Execute init command (Including pulling image)
containers.podman.podman_container:
name: "affine_init"
image: "ghcr.io/toeverything/affine:{{ version['containers']['affine'] }}"
command: ['sh', '-c', 'node ./scripts/self-host-predeploy.js']
state: "started"
rm: true
detach: false
secrets:
- "AFFINE_DATABASE_URL,type=env,target=DATABASE_URL"
no_log: true
- name: Create .init file
ansible.builtin.file:
path: "{{ node['home_path'] }}/data/containers/affine/.init"
state: "touch"
mode: "0644"
owner: "{{ ansible_user }}"
group: "svadmins"
- name: Deploy affine.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/affine/affine.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/affine.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_affine"
- name: Enable affine.service
ansible.builtin.systemd:
name: "affine.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -1,13 +1,9 @@
---
-- name: Set gitea container subuid
-ansible.builtin.set_fact:
-gitea_subuid: "100999"
- name: Create gitea directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
-owner: "{{ gitea_subuid }}"
+owner: "{{ services['gitea']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
@@ -20,8 +16,8 @@
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
-dest: "{{ node['home_path'] }}/containers/gitea/ssl/ilnmors_root_ca.crt"
-owner: "{{ gitea_subuid }}"
+dest: "{{ node['home_path'] }}/containers/gitea/ssl/{{ root_cert_filename }}"
+owner: "{{ services['gitea']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
@@ -2,13 +2,12 @@
- name: Set redis service name
ansible.builtin.set_fact:
redis_service: "immich"
-redis_subuid: "100998"
- name: Create redis_immich directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
-owner: "{{ redis_subuid }}"
+owner: "{{ services['redis']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
@@ -24,6 +23,7 @@
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
+register: "is_redis_conf"
- name: Deploy redis container file
ansible.builtin.template:
@@ -32,7 +32,7 @@
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
-register: "is_redis_conf"
+register: "is_redis_containerfile"
- name: Enable (Restart) redis service
ansible.builtin.systemd:
@@ -41,7 +41,7 @@
enabled: true
daemon_reload: true
scope: "user"
-when: is_redis_conf.changed # noqa: no-handler
+when: is_redis_conf.changed or is_redis_containerfile.changed # noqa: no-handler
- name: Add user in video, render group
ansible.builtin.user:
@@ -69,7 +69,7 @@
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
-dest: "{{ node['home_path'] }}/containers/immich/ssl/ilnmors_root_ca.crt"
+dest: "{{ node['home_path'] }}/containers/immich/ssl/{{ root_cert_filename }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0440"
@@ -0,0 +1,176 @@
---
- name: Set redis service name
ansible.builtin.set_fact:
redis_service: "nextcloud"
- name: Create redis_nextcloud directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['redis']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "containers/redis"
- "containers/redis/{{ redis_service }}"
- "containers/redis/{{ redis_service }}/data"
become: true
- name: Deploy redis config file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.conf.j2"
dest: "{{ node['home_path'] }}/containers/redis/{{ redis_service }}/redis.conf"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_conf"
- name: Deploy redis container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/redis/redis.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/redis_{{ redis_service }}.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
register: "is_redis_containerfile"
- name: Enable (Restart) redis service
ansible.builtin.systemd:
name: "redis_{{ redis_service }}.service"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
when: is_redis_conf.changed or is_redis_containerfile.changed # noqa: no-handler
- name: Create nextcloud directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/nextcloud"
- "data/containers/nextcloud/html"
- "containers/nextcloud"
- "containers/nextcloud/ssl"
- "containers/nextcloud/ini"
become: true
- name: Check data directory empty
ansible.builtin.stat:
path: "{{ node['home_path'] }}/data/containers/nextcloud/.init"
register: "is_nextcloud_init"
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/nextcloud/ssl/{{ root_cert_filename }}"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_nextcloud"
no_log: true
- name: Initialize nextcloud
when: not is_nextcloud_init.stat.exists
block:
- name: Execute init command (Including pulling image)
containers.podman.podman_container:
name: "nextcloud_init"
image: "docker.io/library/nextcloud:{{ version['containers']['nextcloud'] }}"
command: "/bin/true"
state: "started"
rm: true
detach: false
env:
NEXTCLOUD_UPDATE: "1"
NEXTCLOUD_ADMIN_USER: "admin-local"
NEXTCLOUD_ADMIN_PASSWORD: "{{ hostvars['console']['nextcloud']['admin-local']['password'] }}"
POSTGRES_HOST: "{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}"
POSTGRES_DB: "nextcloud_db"
POSTGRES_USER: "nextcloud"
POSTGRES_PASSWORD: "{{ hostvars['console']['postgresql']['password']['nextcloud'] }}"
PGSSLMODE: "verify-full"
PGSSLROOTCERT: "/etc/ssl/nextcloud/{{ root_cert_filename }}"
PGSSLCERTMODE: "disable"
REDIS_HOST: "host.containers.internal"
REDIS_HOST_PORT: "{{ services['nextcloud']['ports']['redis'] }}"
volume:
- "{{ node['home_path'] }}/containers/nextcloud/ssl:/etc/ssl/nextcloud:ro"
- "{{ node['home_path'] }}/data/containers/nextcloud/html:/var/www/html:rw"
no_log: true
- name: Create .init file
ansible.builtin.file:
path: "{{ node['home_path'] }}/data/containers/nextcloud/.init"
state: "touch"
mode: "0644"
owner: "{{ ansible_user }}"
group: "svadmins"
- name: Deploy config files
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/config/{{ item }}.j2"
dest: "{{ node['home_path'] }}/data/containers/nextcloud/html/config/{{ item }}"
owner: "{{ services['nextcloud']['subuid'] }}"
group: "svadmins"
mode: "0640"
loop:
- "background.config.php"
- "cache.config.php"
- "domain.config.php"
- "local_remote.config.php"
- "user_oidc.config.php"
become: true
notify: "notification_restart_nextcloud"
- name: Deploy opcache.ini file
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/ini/{{ item }}"
dest: "{{ node['home_path'] }}/containers/nextcloud/ini/{{ item }}"
group: "svadmins"
mode: "0644"
loop:
- "opcache.ini"
- "upload.ini"
notify: "notification_restart_nextcloud"
- name: Deploy nextcloud.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/nextcloud.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/nextcloud.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_nextcloud"
- name: Deploy nextcloud-cron service
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/nextcloud/systemd/{{ item }}"
dest: "{{ node['home_path'] }}/.config/systemd/user/{{ item }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
loop:
- "nextcloud-cron.service"
- "nextcloud-cron.timer"
- name: Enable nextcloud.service
ansible.builtin.systemd:
name: "nextcloud.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
- name: Enable nextcloud-cron.timer
ansible.builtin.systemd:
name: "nextcloud-cron.timer"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -0,0 +1,76 @@
---
- name: Create opencloud directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['opencloud']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/opencloud"
- "containers/opencloud"
become: true
- name: Check data directory empty
ansible.builtin.stat:
path: "{{ node['home_path'] }}/data/containers/opencloud/.init"
become: true
register: "is_opencloud_init"
- name: Initialize opencloud
when: not is_opencloud_init.stat.exists
block:
- name: Execute init command (Including pulling image)
containers.podman.podman_container:
name: "opencloud_init"
image: "docker.io/opencloudeu/opencloud:{{ version['containers']['opencloud'] }}"
command: "init"
state: "started"
rm: true
detach: false
env:
IDM_ADMIN_PASSWORD: "{{ hostvars['console']['opencloud']['admin']['password'] }}"
# Verify the certificate (Opencloud to Authelia, authelia uses let's encrypt.)
OC_INSECURE: "true"
volume:
- "{{ node['home_path'] }}/containers/opencloud:/etc/opencloud:rw"
- "{{ node['home_path'] }}/data/containers/opencloud:/var/lib/opencloud:rw"
no_log: true
- name: Create .init file
ansible.builtin.file:
path: "{{ node['home_path'] }}/data/containers/opencloud/.init"
state: "touch"
mode: "0644"
owner: "{{ ansible_user }}"
group: "svadmins"
- name: Deploy configuration files
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/opencloud/etc/{{ item }}.j2"
dest: "{{ node['home_path'] }}/containers/opencloud/{{ item }}"
owner: "{{ services['opencloud']['subuid'] }}"
group: "svadmins"
mode: "0640"
loop:
- "csp.yaml"
- "proxy.yaml"
become: true
notify: "notification_restart_opencloud"
- name: Deploy container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/opencloud/opencloud.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/opencloud.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_opencloud"
- name: Enable opencloud.service
ansible.builtin.systemd:
name: "opencloud.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -2,13 +2,12 @@
- name: Set redis service name
ansible.builtin.set_fact:
redis_service: "paperless"
-redis_subuid: "100998"
- name: Create redis_paperless directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
-owner: "{{ redis_subuid }}"
+owner: "{{ services['redis']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
@@ -24,6 +23,7 @@
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
+register: "is_redis_conf"
- name: Deploy redis container file
ansible.builtin.template:
@@ -32,7 +32,7 @@
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
-register: "is_redis_conf"
+register: "is_redis_containerfile"
- name: Enable (Restart) redis service
ansible.builtin.systemd:
@@ -41,17 +41,13 @@
enabled: true
daemon_reload: true
scope: "user"
-when: is_redis_conf.changed # noqa: no-handler
-- name: Set paperless subuid
-ansible.builtin.set_fact:
-paperless_subuid: "100999"
+when: is_redis_conf.changed or is_redis_containerfile.changed # noqa: no-handler
- name: Create paperless directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
-owner: "{{ paperless_subuid }}"
+owner: "{{ services['paperless']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
@@ -68,8 +64,8 @@
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
-dest: "{{ node['home_path'] }}/containers/paperless/ssl/ilnmors_root_ca.crt"
-owner: "{{ paperless_subuid }}"
+dest: "{{ node['home_path'] }}/containers/paperless/ssl/{{ root_cert_filename }}"
+owner: "{{ services['paperless']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
@@ -100,7 +96,7 @@
"client_id": "paperless",
"secret": "{{ hostvars['console']['paperless']['oidc']['secret'] }}",
"settings": {
-"server_url": "https://authelia.ilnmors.com/.well-known/openid-configuration",
+"server_url": "https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}/.well-known/openid-configuration",
"token_auth_method": "client_secret_post"
}
}
@@ -15,7 +15,7 @@
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
-dest: "{{ node['home_path'] }}/containers/vaultwarden/ssl/ilnmors_root_ca.crt"
+dest: "{{ node['home_path'] }}/containers/vaultwarden/ssl/{{ root_cert_filename }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0440"
@@ -34,7 +34,8 @@
value: "{{ hostvars['console']['vaultwarden']['admin']['hash'] }}"
- name: "VW_DATABASE_URL"
value: "postgresql://vaultwarden:{{ hostvars['console']['postgresql']['password']['vaultwarden'] | urlencode | replace('/', '%2F') }}\
-@{{ infra_uri['postgresql']['domain'] }}/vaultwarden_db?sslmode=verify-full&sslrootcert=/etc/ssl/vaultwarden/ilnmors_root_ca.crt"
+@{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}/vaultwarden_db?sslmode=verify-full&\
+sslrootcert=/etc/ssl/vaultwarden/{{ root_cert_filename }}"
notify: "notification_restart_vaultwarden"
no_log: true
@@ -0,0 +1,58 @@
---
- name: Create vikunja directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/{{ item }}"
state: "directory"
owner: "{{ services['vikunja']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
- "data/containers/vikunja"
- "containers/vikunja"
- "containers/vikunja/ssl"
become: true
- name: Deploy root certificate
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/vikunja/ssl/{{ root_cert_filename }}"
owner: "{{ services['vikunja']['subuid'] }}"
group: "svadmins"
mode: "0440"
become: true
notify: "notification_restart_vikunja"
no_log: true
- name: Register secret value to podman secret
containers.podman.podman_secret:
name: "{{ item.name }}"
data: "{{ item.value }}"
state: "present"
force: true
loop:
- name: "VIKUNJA_SERVICE_JWTSECRET"
value: "{{ hostvars['console']['vikunja']['session_secret'] }}"
- name: "VIKUNJA_DATABASE_PASSWORD"
value: "{{ hostvars['console']['postgresql']['password']['vikunja'] }}"
- name: "VIKUNJA_AUTH_OPENID_PROVIDERS_authelia_CLIENTSECRET"
value: "{{ hostvars['console']['vikunja']['oidc']['secret'] }}"
notify: "notification_restart_vikunja"
no_log: true
- name: Deploy vikunja.container file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/app/vikunja/vikunja.container.j2"
dest: "{{ node['home_path'] }}/.config/containers/systemd/vikunja.container"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0644"
notify: "notification_restart_vikunja"
- name: Enable vikunja.service
ansible.builtin.systemd:
name: "vikunja.service"
state: "started"
enabled: true
daemon_reload: true
scope: "user"
@@ -27,7 +27,7 @@
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/authelia/certs/ilnmors_root_ca.crt"
dest: "{{ node['home_path'] }}/containers/authelia/certs/{{ root_cert_filename }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0440"
@@ -2,7 +2,7 @@
- name: Deploy root_ca.crt
ansible.builtin.copy:
content: "{{ hostvars['console']['ca']['root']['crt'] }}"
dest: "/usr/local/share/ca-certificates/ilnmors_root_ca.crt"
dest: "/usr/local/share/ca-certificates/{{ root_cert_filename }}"
owner: "root"
group: "root"
mode: "0644"
@@ -54,7 +54,7 @@
- name: Deploy root crt for build
ansible.builtin.copy:
content: "{{ hostvars['console']['ca']['root']['crt'] }}"
dest: "{{ node['home_path'] }}/containers/caddy/build/ilnmors_root_ca.crt"
dest: "{{ node['home_path'] }}/containers/caddy/build/{{ root_cert_filename }}"
owner: "{{ ansible_user }}"
group: "svadmins"
mode: "0640"
@@ -62,7 +62,7 @@
- name: Build caddy container image
containers.podman.podman_image:
name: "ilnmors.internal/{{ node['name'] }}/caddy"
name: "{{ domain['internal'] }}/{{ node['name'] }}/caddy"
# check tags from container file
tag: "{{ version['containers']['caddy'] }}"
state: "build"
@@ -37,9 +37,9 @@
KOPIA_PASSWORD: "{{ hostvars['console']['kopia']['user']['console'] }}"
ansible.builtin.shell: |
/usr/bin/kopia repository connect server \
--url=https://{{ infra_uri['kopia']['domain'] }}:{{ infra_uri['kopia']['ports']['https'] }} \
--url=https://{{ services['kopia']['domain'] }}.{{ domain['internal'] }}:{{ services['kopia']['ports']['https'] }} \
--override-username=console \
--override-hostname=console.ilnmors.internal
--override-hostname=console.{{ domain['internal'] }}
changed_when: false
failed_when: is_kopia_connected.rc != 0
register: "is_kopia_connected"
@@ -15,7 +15,7 @@
state: "directory"
mode: "0700"
- name: Create contaienr data directory for app
- name: Create container data directory for app
ansible.builtin.file:
path: "{{ node['home_path'] }}/data/containers"
owner: "{{ ansible_user }}"
@@ -23,7 +23,7 @@
become: true
ansible.builtin.copy:
content: |
@cert-authority *.ilnmors.internal {{ hostvars['console']['ssh']['ca']['pub'] }}
@cert-authority *.{{ domain['internal'] }} {{ hostvars['console']['ssh']['ca']['pub'] }}
dest: "/etc/ssh/ssh_known_hosts"
owner: "root"
group: "root"
@@ -21,8 +21,8 @@
become: true
- name: Deploy ddns service files
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/fw/ddns/{{ item }}"
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/systemd/fw/ddns/{{ item }}.j2"
dest: "{{ node['home_path'] }}/.config/systemd/user/{{ item }}"
owner: "{{ ansible_user }}"
group: "svadmins"
@@ -12,7 +12,7 @@
- name: Reload postgresql
ansible.builtin.command:
/usr/bin/podman exec -u postgres postgresql sh -c "pg_ctl reload"
when: not (is_postgresql_init_run | default(false))
when: is_postgresql_init.stat.exists
changed_when: false
listen: "notification_reload_postgresql"
ignore_errors: true # noqa: ignore-errors
@@ -24,7 +24,7 @@
enabled: true
daemon_reload: true
scope: "user"
when: not (is_postgresql_init_run | default(false))
when: is_postgresql_init.stat.exists
changed_when: false
listen: "notification_restart_postgresql"
ignore_errors: true # noqa: ignore-errors
@@ -73,10 +73,10 @@
listen: "notification_restart_grafana"
ignore_errors: true # noqa: ignore-errors
- name: Enable x509-exporter.service
- name: Restart x509-exporter.service
ansible.builtin.systemd:
name: "x509-exporter.service"
state: "started"
state: "restarted"
enabled: true
daemon_reload: true
scope: "user"
@@ -1,12 +1,8 @@
---
- name: Set ca container subuid
ansible.builtin.set_fact:
ca_subuid: "100999"
- name: Create ca directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/containers/{{ item }}"
owner: "{{ ca_subuid }}"
owner: "{{ services['ca']['subuid'] }}"
group: "svadmins"
state: "directory"
mode: "0770"
@@ -32,7 +28,7 @@
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/ca/config/{{ item }}.j2"
dest: "{{ node['home_path'] }}/containers/ca/config/{{ item }}"
owner: "{{ ca_subuid }}"
owner: "{{ services['ca']['subuid'] }}"
group: "svadmins"
mode: "0400"
loop:
@@ -46,19 +42,19 @@
content: |
{{ item.value }}
dest: "{{ item.path }}/{{ item.name }}"
owner: "{{ ca_subuid }}"
owner: "{{ services['ca']['subuid'] }}"
group: "svadmins"
mode: "{{ item.mode }}"
loop:
- name: "ilnmors_root_ca.crt"
- name: "{{ root_cert_filename }}"
value: "{{ hostvars['console']['ca']['root']['crt'] }}"
path: "{{ node['home_path'] }}/containers/ca/certs"
mode: "0440"
- name: "ilnmors_intermediate_ca.crt"
- name: "{{ intermediate_cert_filename }}"
value: "{{ hostvars['console']['ca']['intermediate']['crt'] }}"
path: "{{ node['home_path'] }}/containers/ca/certs"
mode: "0440"
- name: "ilnmors_intermediate_ca.key"
- name: "{{ intermediate_key_filename }}"
value: "{{ hostvars['console']['ca']['intermediate']['key'] }}"
path: "{{ node['home_path'] }}/containers/ca/secrets"
mode: "0400"
@@ -1,12 +1,8 @@
---
- name: Set grafana container subuid
ansible.builtin.set_fact:
grafana_subuid: "100471"
- name: Create grafana directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/containers/{{ item }}"
owner: "{{ grafana_subuid }}"
owner: "{{ services['grafana']['subuid'] }}"
group: "svadmins"
state: "directory"
mode: "0770"
@@ -23,8 +19,8 @@
ansible.builtin.copy:
content: |
{{ hostvars['console']['ca']['root']['crt'] }}
dest: "{{ node['home_path'] }}/containers/grafana/ssl/ilnmors_root_ca.crt"
owner: "{{ grafana_subuid }}"
dest: "{{ node['home_path'] }}/containers/grafana/ssl/{{ root_cert_filename }}"
owner: "{{ services['grafana']['subuid'] }}"
group: "svadmins"
mode: "0400"
become: true
@@ -51,7 +47,7 @@
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/grafana/etc/{{ item }}.j2"
dest: "{{ node['home_path'] }}/containers/grafana/etc/{{ item }}"
owner: "{{ grafana_subuid }}"
owner: "{{ services['grafana']['subuid'] }}"
group: "svadmins"
mode: "0400"
loop:
@@ -61,11 +57,11 @@
notify: "notification_restart_grafana"
no_log: true
- name: Deploy provisioing and dashboard files
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/grafana/etc/provisioning/"
dest: "{{ node['home_path'] }}/containers/grafana/etc/provisioning/"
owner: "{{ grafana_subuid }}"
- name: Deploy provisioning file
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/grafana/etc/provisioning/datasources/datasources.yaml.j2"
dest: "{{ node['home_path'] }}/containers/grafana/etc/provisioning/datasources/datasources.yaml"
owner: "{{ services['grafana']['subuid'] }}"
group: "svadmins"
mode: "0400"
become: true
@@ -1,12 +1,8 @@
---
- name: Set ldap container subuid
ansible.builtin.set_fact:
ldap_subuid: "100999"
- name: Create ldap directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/containers/{{ item }}"
owner: "{{ ldap_subuid }}"
owner: "{{ services['ldap']['subuid'] }}"
group: "svadmins"
state: "directory"
mode: "0770"
@@ -21,11 +17,11 @@
content: |
{{ item.value }}
dest: "{{ node['home_path'] }}/containers/ldap/ssl/{{ item.name }}"
owner: "{{ ldap_subuid }}"
owner: "{{ services['ldap']['subuid'] }}"
group: "svadmins"
mode: "{{ item.mode }}"
loop:
- name: "ilnmors_root_ca.crt"
- name: "{{ root_cert_filename }}"
value: "{{ hostvars['console']['ca']['root']['crt'] }}"
mode: "0440"
- name: "ldap.crt"
@@ -50,7 +46,7 @@
# urlencode doesn't encode `/` as `%2F`; an explicit replace is required
- name: "LLDAP_DATABASE_URL"
value: "postgres://ldap:{{ hostvars['console']['postgresql']['password']['ldap'] | urlencode | replace('/', '%2F') }}\
@{{ infra_uri['postgresql']['domain'] }}/ldap_db?sslmode=verify-full&sslrootcert=/etc/ssl/ldap/ilnmors_root_ca.crt"
@{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}/ldap_db?sslmode=verify-full&sslrootcert=/etc/ssl/ldap/{{ root_cert_filename }}"
- name: "LLDAP_KEY_SEED"
value: "{{ hostvars['console']['ldap']['seed_key'] }}"
- name: "LLDAP_JWT_SECRET"
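The `urlencode`-plus-`replace` comment above mirrors a quirk also found in Python's `urllib.parse.quote`, which treats `/` as safe by default. A minimal sketch, using a hypothetical password, of why the explicit `%2F` replacement is needed before embedding credentials in a `postgres://` URL:

```python
from urllib.parse import quote

password = "p@ss/word"  # hypothetical password containing '/'

# quote() leaves '/' unescaped by default (safe='/'), which would break
# the userinfo part of a postgres:// connection URL.
encoded = quote(password)
assert "/" in encoded

# Passing safe="" (the equivalent of the playbook's replace('/', '%2F'))
# yields a value with every reserved character percent-encoded.
fully_encoded = quote(password, safe="")
assert "%2F" in fully_encoded
```

Whether Jinja2's `urlencode` filter behaves identically is an assumption here; the playbook's own comment is the authoritative statement for this repo.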
@@ -59,6 +55,8 @@
no_log: true
- name: Initiate ldap (when: false — activate this block only when the DB data does not exist in postgresql)
# This task doesn't rely on checking the ".init" file because it can overwrite the original database.
# The absence of the ".init" file cannot guarantee the DB is empty.
when: false
become: true
block:
@@ -78,7 +76,7 @@
detach: false
env:
TZ: "Asia/Seoul"
LLDAP_LDAP_BASE_DN: "dc=ilnmors,dc=internal"
LLDAP_LDAP_BASE_DN: "{{ domain['dc'] }}"
secrets:
- "LLDAP_DATABASE_URL,type=env"
- "LLDAP_KEY_SEED,type=env"
@@ -1,13 +1,9 @@
---
- name: Set loki container subuid
ansible.builtin.set_fact:
loki_subuid: "110000" # 10001
- name: Create loki directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/containers/{{ item }}"
state: "directory"
owner: "{{ loki_subuid }}"
owner: "{{ services['loki']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
@@ -18,10 +14,10 @@
become: true
- name: Deploy loki configuration file
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/loki/etc/loki.yaml"
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/loki/etc/loki.yaml.j2"
dest: "{{ node['home_path'] }}/containers/loki/etc/loki.yaml"
owner: "{{ loki_subuid }}"
owner: "{{ services['loki']['subuid'] }}"
group: "svadmins"
mode: "0600"
become: true
@@ -33,11 +29,11 @@
content: |
{{ item.value }}
dest: "{{ node['home_path'] }}/containers/loki/ssl/{{ item.name }}"
owner: "{{ loki_subuid }}"
owner: "{{ services['loki']['subuid'] }}"
group: "svadmins"
mode: "{{ item.mode }}"
loop:
- name: "ilnmors_root_ca.crt"
- name: "{{ root_cert_filename }}"
value: "{{ hostvars['console']['ca']['root']['crt'] }}"
mode: "0440"
- name: "loki.crt"
@@ -1,8 +1,4 @@
---
- name: Set postgresql container subuid
ansible.builtin.set_fact:
postgresql_subuid: "100998"
- name: Set connected services list
ansible.builtin.set_fact:
connected_services:
@@ -13,12 +9,15 @@
- "gitea"
- "immich"
- "paperless"
- "vikunja"
- "affine"
- "nextcloud"
- name: Create postgresql directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/containers/{{ item }}"
state: "directory"
owner: "{{ postgresql_subuid }}"
owner: "{{ services['postgresql']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
@@ -41,7 +40,7 @@
- name: Build postgresql container image
containers.podman.podman_image:
name: "ilnmors.internal/{{ node['name'] }}/postgres"
name: "{{ domain['internal'] }}/{{ node['name'] }}/postgres"
# check tags from container file
tag: "pg{{ version['containers']['postgresql'] }}-vectorchord{{ version['containers']['vectorchord'] }}"
state: "build"
@@ -55,7 +54,7 @@
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/postgresql/config/{{ item }}.j2"
dest: "{{ node['home_path'] }}/containers/postgresql/config/{{ item }}"
owner: "{{ postgresql_subuid }}"
owner: "{{ services['postgresql']['subuid'] }}"
group: "svadmins"
mode: "0600"
loop:
@@ -70,11 +69,11 @@
content: |
{{ item.value }}
dest: "{{ node['home_path'] }}/containers/postgresql/ssl/{{ item.name }}"
owner: "{{ postgresql_subuid }}"
owner: "{{ services['postgresql']['subuid'] }}"
group: "svadmins"
mode: "{{ item.mode }}"
loop:
- name: "ilnmors_root_ca.crt"
- name: "{{ root_cert_filename }}"
value: "{{ hostvars['console']['ca']['root']['crt'] }}"
mode: "0440"
- name: "postgresql.crt"
@@ -90,15 +89,13 @@
no_log: true
- name: Check data directory empty
ansible.builtin.find:
paths: "{{ node['home_path'] }}/containers/postgresql/data/"
hidden: true
file_type: "any"
ansible.builtin.stat:
path: "{{ node['home_path'] }}/containers/postgresql/data/.init"
become: true
register: "is_data_dir_empty"
register: "is_postgresql_init"
- name: Prepare initiating DB
when: is_data_dir_empty.matched == 0
when: not is_postgresql_init.stat.exists
become: true
block:
# `init/pg_cluster.sql` should be fetched from postgresql's backup directory before running the initiation
@@ -106,7 +103,7 @@
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/postgresql/init/pg_cluster.sql"
dest: "{{ node['home_path'] }}/containers/postgresql/init/0_pg_cluster.sql"
owner: "{{ postgresql_subuid }}"
owner: "{{ services['postgresql']['subuid'] }}"
group: "svadmins"
mode: "0600"
@@ -114,15 +111,20 @@
ansible.builtin.copy:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/postgresql/init/pg_{{ item }}.sql"
dest: "{{ node['home_path'] }}/containers/postgresql/init/{{ index_num + 1 }}_pg_{{ item }}.sql"
owner: "{{ postgresql_subuid }}"
owner: "{{ services['postgresql']['subuid'] }}"
group: "svadmins"
mode: "0600"
loop: "{{ connected_services }}"
loop_control:
index_var: index_num
- name: Set is_postgresql_init_run
ansible.builtin.set_fact:
is_postgresql_init_run: true
- name: Create .init file
ansible.builtin.file:
path: "{{ node['home_path'] }}/containers/postgresql/data/.init"
state: "touch"
mode: "0644"
owner: "{{ ansible_user }}"
group: "svadmins"
- name: Deploy container file
ansible.builtin.template:
@@ -1,13 +1,9 @@
---
- name: Set prometheus container subuid
ansible.builtin.set_fact:
prometheus_subuid: "165533" # nobody - 65534
- name: Create prometheus directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/containers/{{ item }}"
state: "directory"
owner: "{{ prometheus_subuid }}"
owner: "{{ services['prometheus']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
@@ -21,7 +17,7 @@
ansible.builtin.template:
src: "{{ hostvars['console']['node']['config_path'] }}/services/containers/infra/prometheus/etc/{{ item }}.j2"
dest: "{{ node['home_path'] }}/containers/prometheus/etc/{{ item }}"
owner: "{{ prometheus_subuid }}"
owner: "{{ services['prometheus']['subuid'] }}"
group: "svadmins"
mode: "0600"
loop:
@@ -37,11 +33,11 @@
content: |
{{ item.value }}
dest: "{{ node['home_path'] }}/containers/prometheus/ssl/{{ item.name }}"
owner: "{{ prometheus_subuid }}"
owner: "{{ services['prometheus']['subuid'] }}"
group: "svadmins"
mode: "{{ item.mode }}"
loop:
- name: "ilnmors_root_ca.crt"
- name: "{{ root_cert_filename }}"
value: "{{ hostvars['console']['ca']['root']['crt'] }}"
mode: "0440"
- name: "prometheus.crt"
@@ -1,13 +1,9 @@
---
- name: Set x509-exporter container subuid
ansible.builtin.set_fact:
x509_exporter_subuid: "165533" # nobody - 65534
- name: Create x509-exporter directory
ansible.builtin.file:
path: "{{ node['home_path'] }}/containers/{{ item }}"
state: "directory"
owner: "{{ x509_exporter_subuid }}"
owner: "{{ services['x509-exporter']['subuid'] }}"
group: "svadmins"
mode: "0770"
loop:
@@ -20,7 +16,7 @@
content: |
{{ item.value }}
dest: "{{ node['home_path'] }}/containers/x509-exporter/certs/{{ item.name }}"
owner: "{{ x509_exporter_subuid }}"
owner: "{{ services['x509-exporter']['subuid'] }}"
group: "svadmins"
mode: "0440"
loop:
@@ -3,32 +3,32 @@
::1 {{ node['local_san'] }}
{% if node['name'] == 'console' %}
# Hosts IPv4
{{ hostvars['fw']['network4']['firewall']['server'] }} fw.ilnmors.internal
{{ hostvars['fw']['network4']['vmm']['client'] }} init.vmm.ilnmors.internal
{{ hostvars['fw']['network4']['vmm']['server'] }} vmm.ilnmors.internal
{{ hostvars['fw']['network4']['infra']['server'] }} infra.ilnmors.internal
{{ hostvars['fw']['network4']['auth']['server'] }} auth.ilnmors.internal
{{ hostvars['fw']['network4']['app']['server'] }} app.ilnmors.internal
{{ hostvars['fw']['network4']['firewall']['server'] }} fw.{{ domain['internal'] }}
{{ hostvars['fw']['network4']['vmm']['client'] }} init.vmm.{{ domain['internal'] }}
{{ hostvars['fw']['network4']['vmm']['server'] }} vmm.{{ domain['internal'] }}
{{ hostvars['fw']['network4']['infra']['server'] }} infra.{{ domain['internal'] }}
{{ hostvars['fw']['network4']['auth']['server'] }} auth.{{ domain['internal'] }}
{{ hostvars['fw']['network4']['app']['server'] }} app.{{ domain['internal'] }}
# Hosts IPv6
{{ hostvars['fw']['network6']['firewall']['server'] }} fw.ilnmors.internal
{{ hostvars['fw']['network6']['vmm']['client'] }} init.vmm.ilnmors.internal
{{ hostvars['fw']['network6']['vmm']['server'] }} vmm.ilnmors.internal
{{ hostvars['fw']['network6']['infra']['server'] }} infra.ilnmors.internal
{{ hostvars['fw']['network6']['auth']['server'] }} auth.ilnmors.internal
{{ hostvars['fw']['network6']['app']['server'] }} app.ilnmors.internal
{{ hostvars['fw']['network6']['firewall']['server'] }} fw.{{ domain['internal'] }}
{{ hostvars['fw']['network6']['vmm']['client'] }} init.vmm.{{ domain['internal'] }}
{{ hostvars['fw']['network6']['vmm']['server'] }} vmm.{{ domain['internal'] }}
{{ hostvars['fw']['network6']['infra']['server'] }} infra.{{ domain['internal'] }}
{{ hostvars['fw']['network6']['auth']['server'] }} auth.{{ domain['internal'] }}
{{ hostvars['fw']['network6']['app']['server'] }} app.{{ domain['internal'] }}
{% else %}
# IPv4
# Crowdsec, blocky, bind(fw)
{{ hostvars['fw']['network4']['firewall']['server'] }} ntp.ilnmors.internal crowdsec.ilnmors.internal
{{ hostvars['fw']['network4']['blocky']['server'] }} blocky.ilnmors.internal
{{ hostvars['fw']['network4']['bind']['server'] }} bind.ilnmors.internal
{{ hostvars['fw']['network4']['firewall']['server'] }} ntp.{{ domain['internal'] }} crowdsec.{{ domain['internal'] }}
{{ hostvars['fw']['network4']['blocky']['server'] }} blocky.{{ domain['internal'] }}
{{ hostvars['fw']['network4']['bind']['server'] }} bind.{{ domain['internal'] }}
# DB, LDAP, CA, Prometheus, Loki, mail (infra)
{{ hostvars['fw']['network4']['infra']['server'] }} postgresql.ilnmors.internal ldap.ilnmors.internal prometheus.ilnmors.internal loki.ilnmors.internal mail.ilnmors.internal ca.ilnmors.internal
{{ hostvars['fw']['network4']['infra']['server'] }} postgresql.{{ domain['internal'] }} ldap.{{ domain['internal'] }} prometheus.{{ domain['internal'] }} loki.{{ domain['internal'] }} mail.{{ domain['internal'] }} ca.{{ domain['internal'] }}
# IPv6
# Crowdsec, blocky, bind(fw)
{{ hostvars['fw']['network6']['firewall']['server'] }} ntp.ilnmors.internal crowdsec.ilnmors.internal
{{ hostvars['fw']['network6']['blocky']['server'] }} blocky.ilnmors.internal
{{ hostvars['fw']['network6']['bind']['server'] }} bind.ilnmors.internal
{{ hostvars['fw']['network6']['firewall']['server'] }} ntp.{{ domain['internal'] }} crowdsec.{{ domain['internal'] }}
{{ hostvars['fw']['network6']['blocky']['server'] }} blocky.{{ domain['internal'] }}
{{ hostvars['fw']['network6']['bind']['server'] }} bind.{{ domain['internal'] }}
# DB, LDAP, CA, Prometheus, Loki, mail (infra)
{{ hostvars['fw']['network6']['infra']['server'] }} postgresql.ilnmors.internal ldap.ilnmors.internal prometheus.ilnmors.internal loki.ilnmors.internal mail.ilnmors.internal ca.ilnmors.internal
{{ hostvars['fw']['network6']['infra']['server'] }} postgresql.{{ domain['internal'] }} ldap.{{ domain['internal'] }} prometheus.{{ domain['internal'] }} loki.{{ domain['internal'] }} mail.{{ domain['internal'] }} ca.{{ domain['internal'] }}
{% endif %}
@@ -1,3 +1,3 @@
[Time]
NTP=ntp.ilnmors.internal
NTP=ntp.{{ domain['internal'] }}
FallbackNTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 3.debian.pool.ntp.org
@@ -30,6 +30,7 @@ define HOSTS4_INFRA = {{ hostvars['fw']['network4']['infra']['server'] }}
define HOSTS4_AUTH = {{ hostvars['fw']['network4']['auth']['server'] }}
define HOSTS4_APP = {{ hostvars['fw']['network4']['app']['server'] }}
define HOSTS4_NAS = {{ hostvars['fw']['network4']['nas']['client'] }}
define HOSTS4_PRINTER = {{ hostvars['fw']['network4']['printer']['client'] }}
define HOSTS6_FW = { {{ hostvars['fw']['network6']['firewall'].values() | join(', ') }} }
define HOSTS6_BLOCKY = {{ hostvars['fw']['network6']['blocky']['server'] }}
@@ -146,6 +147,8 @@ table inet filter {
# Kopia/NAS Console > NAS
oifname $IF_CLIENT ip saddr $HOSTS4_CONSOLE ip daddr $HOSTS4_NAS tcp dport { $PORTS_NAS, $PORTS_KOPIA } accept comment "allow ipv4 web connection (DSM, KOPIA): CONSOLE > FW > CLIENT NAS"
oifname $IF_CLIENT ip6 saddr $HOSTS6_CONSOLE ip6 daddr $HOSTS6_NAS tcp dport { $PORTS_NAS, $PORTS_KOPIA } accept comment "allow ipv6 web connection (DSM, KOPIA): CONSOLE > FW > CLIENT NAS"
# Printer
oifname $IF_CLIENT ip saddr $HOSTS4_CONSOLE ip daddr $HOSTS4_PRINTER accept comment "allow ipv4 printer connection: CONSOLE > FW > PRINTER"
iifname $IF_WAN jump wan comment "set WAN interface rules"
iifname $IF_CLIENT jump client comment "set CLIENT interface rules"
@@ -117,6 +117,9 @@ postgresql:
gitea: ENC[AES256_GCM,data:l+pBCzyQa3000SE9z1R4htD0V0ONsBtKy92dfgsVYsZ3XlEyVJDIBOsugwM=,iv:5t/oHW1vFAmV/s2Ze/cV9Vuqo96Qu6QvZeRbio7VX2s=,tag:4zeQaXiXIzBpy+tXsxmN7Q==,type:str]
immich: ENC[AES256_GCM,data:11jvxTKA/RL0DGL6y2/X092hnDohj6yTrYGK4IVojqBd1gCOBnDvUjgmx14=,iv:oBfHxsx9nxhyKY/WOuWfybxEX2bf+lHEtsaifFRS9lg=,tag:tAfkBdgQ8ZEkLIFcDICKDw==,type:str]
paperless: ENC[AES256_GCM,data:6VBrBbjVoam7SkZCSvoBTdrfkUoDghdGTiBmFLul04X/okXOHeC5zusJffY=,iv:iZumcJ3TWwZD77FzYx8THwCqC+EbnXUBrEKuPh3zgV8=,tag:u2m8SppAdxZ/duNdpuS3oQ==,type:str]
vikunja: ENC[AES256_GCM,data:/+wQdoFPTBG2elI9kZbAVWrHZ0DhMaYr4dc+2z9QNdb3TcDS2PEia0JuSAg=,iv:MViZTyUD8YqMmxSTWCQpJ30f/KQdQGOzPlRHHsQ8lAw=,tag:zov3POno139dkMxFDpj2gg==,type:str]
affine: ENC[AES256_GCM,data:XPXrcszsV06YqCJZ7CDqc4rCwqqNlbtLCFYfLAQ8jamLtft8L2UVrMA4WZo=,iv:vrWdBeckxB9tmEE628j4jhU+hSpE6TXYMGt0hh1Cg84=,tag:hlWwWUGht8NqWTZREMsa1Q==,type:str]
nextcloud: ENC[AES256_GCM,data:ROsximNuWYMTZktmLJPx7W1Qol/uT+APgwoCtFO/6ZYYc3KxKvlk344eqEc=,iv:4d+MrfIHjJKAcwhvZ3g4go66uZcieuL7lngKErJd+fg=,tag:QbWOtxeCbiu62GyrE2atXg==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
@@ -135,7 +138,7 @@ ldap:
authelia: ENC[AES256_GCM,data:G8ZGsLKqEmMzQ5NMAgirF5BQraHNqixtI6dyyaeNhTdXebjJZML52xL36p4=,iv:ZtHAsFYmrQxr+qoQLPW/eme0+nsT148KRsXmW/LNLlU=,tag:Pvjs/eylkgxJpmGBsRmjcw==,type:str]
grafana: ENC[AES256_GCM,data:vWmU3ZKcolETWAY74C3OMD8gMXDeYk+DqssACL0xefIPi5IkbrhYWmnWAnA=,iv:wcRms3Zp8kPM4USRPVa0UHpCTK36SWhK9C8yHSWu2Cs=,tag:gU5S/6fdMZVd/ih3Yd5uJA==,type:str]
il: ENC[AES256_GCM,data:/CyMeo1+rIUAYiB25nI0,iv:jsyiiRN5z9GqcUnTZ0CZo4s+umTc2zeY2FPp+tVOC9o=,tag:cwOHcqMysCxX57w3a+Pzpg==,type:str]
morsalin: null
morsalin: ENC[AES256_GCM,data:YryNch8hF6rx,iv:bNIBur3Jcib8BvKjJ0MejpemsurYTP8rCxo6b2R5yEo=,tag:9dIIgqEPtbeixtgJ1OtMnQ==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
@@ -226,6 +229,43 @@ paperless:
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:V7DJHA2JQirfBsrCGhXrhg==,iv:+jYqX9hGNnuyYj9o9LpCYFVOoD6nSrtc4t40Ag0mMzo=,tag:1wSxKtkJm42reUxdwYDvlg==,type:comment]
vikunja:
session_secret: ENC[AES256_GCM,data:CMyw8JGHyTczGsrOJJwQBKfXMU4Sudvwkur1Lgx4o64=,iv:F2VmpqddiDT4jGaGDKGl6FARsQOt3lLz3X6TjC2MIVU=,tag:UJYyzrl/FX1BNwY4ROFncA==,type:str]
oidc:
secret: ENC[AES256_GCM,data:QwqndYsfr+fh9OLkHYtLYCa6WUdhnL7A4btz1d1eelTwq3Kps5S6BUN5qZg=,iv:51N8byIAAUh4ky7YBAuEJOBEWu1d9AX5W1m37/cLlCM=,tag:GD7jbxNGd748TCPgqsxyMg==,type:str]
hash: ENC[AES256_GCM,data:ORifyT4u1V2CyBCNBgF72wwS2i05mlzA4iIVEa1cH9aaE69PdiQvGGzMHK+tmlfpVaVQEENSt1QDUSSlMyeuZT/3a0JwAvlz+XDbpS7bicL2cB6DCa4JyEd/rbGRXs0/COfxPxXzYv7jq9gd2uSJ+cCGYb/93WuEXSEI6PHi+FF7N94=,iv:FVSGySa4YB2vwenqSagBzxeIexg91ewvcQMix+etmng=,tag:yyQtOgzOZypba+rV3A1K9g==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:EsRGZP7snPchEAMoQN5PoQpiOA==,iv:A/8POGq3pIw7aX5S2vyKtI2vPqH0FT6yZnpe/vVbifw=,tag:BgUYHX2zxIL7yLS0JbI1Yg==,type:comment]
opencloud:
admin:
password: ENC[AES256_GCM,data:VKG7sNTTLHCXRGf4SAlR91+hvc7PaNrnpJX/4kItVcT9W1Hdl/yKgHHD7M8=,iv:WwWnx9KuN+i/Ugwv+HY4IGDZrLHk71hsobGFOn9kml0=,tag:SS6ihrtZjLnlAJR59lw+gw==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:k55osvepVeB1RC5hZ4IF,iv:AlhfmWwn/DiSESWc+ULJSOLUhnrKAIfWr7MeiwV8qc8=,tag:hOgptwUcY6nVxPIhu+DYgw==,type:comment]
affine:
secret_key: ENC[AES256_GCM,data:LLX78DpYnha1JWhgw0sHLzIVq/oIzvT+nB7zgli4mroGbnt7WZaXCx34zKkYRwYj/+0L4IFFVdkzKtK5DO84SgFkS2Bk2iNdCMqIx80CpyiD8IWAcyRu5d6hh82PlgyxU80T/4nbLbIn0GLubPTTeUX8GC3VxRU=,iv:DnmvbhlygSHes0jAkIm4+WXMUQLzr4R4dNa33rO67v8=,tag:+2wlh+/ekiTyShWM4XBbUw==,type:str]
il:
password: ENC[AES256_GCM,data:4zxiQAzXTR+fraRjYT657BIwSqrih3lMPFFSibQdardRMjskAbuRYIQA6mo=,iv:ub3giRG9vCFSuwRXDazYTqWbjENzQUWR36290Kruj1o=,tag:C2Ixd2eTEgzBvUNCNBtJuA==,type:str]
oidc:
secret: ENC[AES256_GCM,data:eRDBrqLZR7MFLlsUwk7Wg7FzxDov7vJLIWQRuKq7vrXbPSJkMcy9jfG2rL4=,iv:UaSoi7gODXgjzihJIDVIdDHJcSAZNV8UKfGeM6YzxqI=,tag:cOUDblcMStP8E4fp+s1WRQ==,type:str]
hash: ENC[AES256_GCM,data:jE1CvFo+mjb/Xc3Ft5ky7on03vcnv79cw/5g/xaldXsv94VRrIjmfGMgHAj07r8j5mDpP34A5bYO1PSe9DYrwRcsXa9OUQuzm/8avFy9wVZDhBUUAGR+jiW1BP9hc6nmSpPVPtle+3sbqOB0ZMjXWwlcAcuknOtuhH1mzwmaDP9yf+M=,iv:CSSaXY/6MpHBMhPLUWPkabIeJ9zpZkcVjiEhxVF0zJM=,tag:f72ekkjJs7Qmh1K9wC8L9w==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:PZS7EbvMHqHGorNUGAWj4dk1,iv:vOE+djRAvBTMM51kHi6kG5Arw3uPXlJt1d/BpcEaD0c=,tag:AuoCHLQz42CYvVVdKFWu1Q==,type:comment]
nextcloud:
admin-local:
password: ENC[AES256_GCM,data:mIwF5A09oqYbdK3bOKid9A896Q5J5Q6Ax+vDNqEJFGNdzd/mJ4oQS6rva+s=,iv:QroUMST2wnEJzk6DySe9tPZaWuqdxzJZ0+oi6mW6x00=,tag:3UTzjupK7+omrI3Hvyr8bA==,type:str]
oidc:
secret: ENC[AES256_GCM,data:Sr4KkKkYdkU0UWdpfUF7PyiGoerjBiw+sOFcENyLxw0FRXGG0Y8gv5uGb4Q=,iv:LbGsNM3+iY7bWFQe88TepVKUdiRQWZ+K7Ubn6ze6lV4=,tag:SbcfIAMW9ZprgahOFU4IQQ==,type:str]
hash: ENC[AES256_GCM,data:CkstbIYQmi72QhsbJZN0lQedgCn7TmGpYcYj0n+NvJIoTlol8G9N/88cwGbVoGK9nEISv54FL94cEJFppnMIuj0BHrhasrZsyI2/Lj52YLWdwNJWNQ+iYt+Ifp/1kI0zqmdoajzZ5DS2w/1evCBC1+JdfTRlpVXmSsHUIPIHelBRj90=,iv:vwvT5TTkF4woxXOvrRRqmrdLXf19s47NIDtdT+zLp0U=,tag:KC0MS0DTH6j3zIHOjCFOSA==,type:str]
#ENC[AES256_GCM,data:ODXFUxxxdQ==,iv:s9zJVx6wo6x517tbNvC+FZ0dFzqbjqeLI6rXBq72hQA=,tag:bXoV2I3LbpmQyddJrtS3Qg==,type:comment]
#
#
#ENC[AES256_GCM,data:T4Wtn49AAxPd2QUFTR+q,iv:bH5goGWBDqumAat9dUv2OwfCUJUpuVqncTMqMBZUXhI=,tag:G+W6hHA+yftQ+4RJpXrxHg==,type:comment]
switch:
password: ENC[AES256_GCM,data:qu0f9L7A0eFq/UCpaRs=,iv:W8LLOp3MSfd/+EfNEZNf91K8GgI5eUfVPoWTRES2C0Y=,tag:Q5FlAOfwqwJwPvd7k6i+0g==,type:str]
@@ -255,7 +295,7 @@ sops:
UmliaFNxVTBqRkI1QWJpWGpTRWxETW8KEY/8AfU73UOzCGhny1cNnd5dCNv7bHXt
k+uyWPPi+enFkVaceSwMFrA66uaWWrwAj11sXEB7yzvGFPrnAGezjQ==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2026-03-24T06:37:53Z"
mac: ENC[AES256_GCM,data:+by7KiDiod7d0KtLB8jBnuTUtISLkn7WrwW/MrOGnxxqO9JnmD36HeugM782K79Rgymu0osexyvSQ2xpwfDQL/6WjfKkqxXirpeVrHFjjMFrJ3r2Wnn9GoCRf3ObJEXJD8x59IL/fsTDfzGTLaOG71I5Zs7j+LQnrm4Uj3KD6Rg=,iv:lHcuCw7a7j7CkBT183fYMhpQhx97Mz4DYrWYZQYbFNQ=,tag:yAZXT4FrAbwgkespCPdIBA==,type:str]
lastmodified: "2026-05-02T04:55:25Z"
mac: ENC[AES256_GCM,data:4U/SGYS9eNRgRvUEvZh9E0JSctkZzSpdoUYEAbnOVyU+5u8NcG9lbMUAB4kFXb9kHVGBUI5wMwnzg102g96q1IYw5m/k4lrpePceGVNAxxKpWTnkLROhJlL3Z/Bylgq2mj7PVDcGCGEB0xPDgN+ffa7ldCxIikYmSKktISguwYU=,iv:zqS9iJ54FIaNQhnfOl4YY9QcaZLbPekTxlY1AEp3m/s=,tag:TckIRAKRVyxf/UD+jejNng==,type:str]
unencrypted_suffix: _unencrypted
version: 3.12.1
@@ -9,14 +9,14 @@ Image=ghcr.io/actualbudget/actual-server:{{ version['containers']['actualbudget'
ContainerName=actual-budget
HostName=actual-budget
PublishPort=5006:5006
PublishPort={{ services['actualbudget']['ports']['http'] }}:5006
Volume=%h/data/containers/actual-budget:/data:rw
Environment="TZ=Asia/Seoul"
Environment="ACTUAL_OPENID_DISCOVERY_URL=https://authelia.ilnmors.com/.well-known/openid-configuration"
Environment="ACTUAL_OPENID_DISCOVERY_URL=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}/.well-known/openid-configuration"
Environment="ACTUAL_OPENID_CLIENT_ID=actual-budget"
Environment="ACTUAL_OPENID_SERVER_HOSTNAME=https://budget.ilnmors.com"
Environment="ACTUAL_OPENID_SERVER_HOSTNAME=https://{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}"
Environment="ACTUAL_OPENID_AUTH_METHOD=oauth2"
Secret=ACTUAL_OPENID_CLIENT_SECRET,type=env
@@ -0,0 +1,49 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=AFFiNE
After=redis_affine.service manticore_affine.service
Wants=redis_affine.service manticore_affine.service
[Container]
Image=ghcr.io/toeverything/affine:{{ version['containers']['affine'] }}
ContainerName=affine
HostName=affine
PublishPort={{ services['affine']['ports']['http'] }}:3010
Volume=%h/data/containers/affine:/root/.affine/storage:rw
Volume=%h/containers/affine/config:/root/.affine/config
Volume=%h/containers/affine/ssl:/etc/ssl/affine:ro
# General
Environment="TZ=Asia/Seoul"
## OIDC callback URIs
Environment="AFFINE_SERVER_HOST={{ services['affine']['domain']['public'] }}.{{ domain['public'] }}"
Environment="AFFINE_SERVER_EXTERNAL_URL=https://{{ services['affine']['domain']['public'] }}.{{ domain['public'] }}"
Environment="AFFINE_SERVER_HTTPS=true"
Secret=AFFINE_PRIVATE_KEY,type=env
# Database
Secret=AFFINE_DATABASE_URL,type=env,target=DATABASE_URL
## Enable AI function: this needs pgvector
# Redis
Environment="REDIS_SERVER_HOST=host.containers.internal"
Environment="REDIS_SERVER_PORT={{ services['affine']['ports']['redis'] }}"
# Indexer
Environment="AFFINE_INDEXER_ENABLED=true"
Environment="AFFINE_INDEXER_SEARCH_ENDPOINT=http://host.containers.internal:{{ services['affine']['ports']['manticore'] }}"
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
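The `ExecStartPre=/usr/bin/nc -zv …` line above gates container start on postgres reachability. A rough Python equivalent of that `nc -z` probe (host and port are placeholders, not values from this repo):

```python
import socket

def tcp_ready(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connect to host:port succeeds, like `nc -z`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A plain boolean check suffices because the quadlet's `Restart=always` / `RestartSec=10s` already provides the retry loop: when the probe (and thus `ExecStartPre`) fails, systemd restarts the unit until postgres answers.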
@@ -13,7 +13,7 @@ Image=docker.io/gitea/gitea:{{ version['containers']['gitea'] }}
ContainerName=gitea
HostName=gitea
PublishPort=3000:3000/tcp
PublishPort={{ services['gitea']['ports']['http'] }}:3000/tcp
Volume=%h/data/containers/gitea:/data:rw
Volume=%h/containers/gitea/ssl:/etc/ssl/gitea:ro
@@ -23,18 +23,18 @@ Environment="TZ=Asia/Seoul"
Environment="GITEA__server__DISABLE_SSH=true"
# Database
Environment="GITEA__database__DB_TYPE=postgres"
Environment="GITEA__database__HOST={{ infra_uri['postgresql']['domain'] }}:{{ infra_uri['postgresql']['ports']['tcp'] }}"
Environment="GITEA__database__HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}"
Environment="GITEA__database__NAME=gitea_db"
Environment="GITEA__database__USER=gitea"
Secret=GITEA__database__PASSWD,type=env
Environment="GITEA__database__SSL_MODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/gitea/ilnmors_root_ca.crt"
Environment="PGSSLROOTCERT=/etc/ssl/gitea/{{ root_cert_filename }}"
# OAuth2 client
Environment="GITEA__oauth2_client__ACCOUNT_LINKING=auto"
# OIDC configuration
Environment="GITEA__openid__ENABLE_OPENID_SIGNIN=false"
Environment="GITEA__openid__ENABLE_OPENID_SIGNUP=true"
Environment="GITEA__openid__WHITELISTED_URIS=authelia.ilnmors.com"
Environment="GITEA__openid__WHITELISTED_URIS={{ services['authelia']['domain'] }}.{{ domain['public'] }}"
# automatic create user via authelia
Environment="GITEA__service__DISABLE_REGISTRATION=false"
Environment="GITEA__service__ALLOW_ONLY_EXTERNAL_REGISTRATION=true"
@@ -42,7 +42,7 @@ Environment="GITEA__service__SHOW_REGISTRATION_BUTTON=false"
[Service]
ExecStartPre=/usr/bin/nc -zv {{ infra_uri['postgresql']['domain'] }} {{ infra_uri['postgresql']['ports']['tcp'] }}
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
@@ -13,7 +13,7 @@ Image=ghcr.io/immich-app/immich-machine-learning:{{ version['containers']['immic
ContainerName=immich-ml
HostName=immich-ml
PublishPort=3003:3003
PublishPort={{ services['immich-ml']['ports']['http'] }}:3003
# iGPU access for OpenVINO
AddDevice=/dev/dri:/dev/dri
@@ -13,7 +13,7 @@ Image=ghcr.io/immich-app/immich-server:{{ version['containers']['immich'] }}
ContainerName=immich
HostName=immich
PublishPort=2283:2283
PublishPort={{ services['immich']['ports']['http'] }}:2283
# iGPU access
AddDevice=/dev/dri:/dev/dri
@@ -25,22 +25,26 @@ Volume=%h/containers/immich/ssl:/etc/ssl/immich:ro
# Environment
Environment="TZ=Asia/Seoul"
# New environment variable since version 2.7.0 to enable CSP
Environment="IMMICH_HELMET_FILE=true"
# Redis
Environment="REDIS_HOSTNAME=host.containers.internal"
Environment="REDIS_PORT={{ hostvars['app']['redis']['immich'] }}"
Environment="REDIS_PORT={{ services['immich']['ports']['redis'] }}"
Environment="REDIS_DBINDEX=0"
# Database
Environment="DB_HOSTNAME={{ infra_uri['postgresql']['domain'] }}"
Environment="DB_PORT={{ infra_uri['postgresql']['ports']['tcp'] }}"
Environment="DB_HOSTNAME={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}"
Environment="DB_PORT={{ services['postgresql']['ports']['tcp'] }}"
Environment="DB_USERNAME=immich"
Environment="DB_DATABASE_NAME=immich_db"
Environment="DB_PASSWORD_FILE=/run/secrets/DB_PASSWORD"
Environment="DB_SSL_MODE=verify-full"
Environment="NODE_EXTRA_CA_CERTS=/etc/ssl/immich/ilnmors_root_ca.crt"
Environment="NODE_EXTRA_CA_CERTS=/etc/ssl/immich/{{ root_cert_filename }}"
Secret=IMMICH_DB_PASSWORD,target=/run/secrets/DB_PASSWORD
[Service]
ExecStartPre=/usr/bin/nc -zv {{ infra_uri['postgresql']['domain'] }} {{ infra_uri['postgresql']['ports']['tcp'] }}
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
@@ -0,0 +1,25 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Manticore - {{ manticore_service }}
[Container]
Image=docker.io/manticoresearch/manticore:{{ version['containers']['manticore'] }}
ContainerName=manticore_{{ manticore_service }}
HostName=manticore_{{ manticore_service }}
PublishPort={{ services[manticore_service]['ports']['manticore'] }}:9308
Volume=%h/data/containers/manticore/{{ manticore_service }}:/var/lib/manticore:rw
# General
Environment="TZ=Asia/Seoul"
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,4 @@
<?php
$CONFIG = [
'maintenance_window_start' => 18,
];
@@ -0,0 +1,12 @@
<?php
$CONFIG = [
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
'host' => 'host.containers.internal',
'port' => {{ services['nextcloud']['ports']['redis'] }},
'timeout' => 1.5,
'dbindex' => 0,
],
];
@@ -0,0 +1,9 @@
<?php
$CONFIG = [
'trusted_domains' => [
'{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
],
'overwritehost' => '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
'overwriteprotocol' => 'https',
'overwrite.cli.url' => 'https://{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}',
];
@@ -0,0 +1,4 @@
<?php
$CONFIG = [
'allow_local_remote_servers' => true,
];
@@ -0,0 +1,9 @@
<?php
$CONFIG = [
'user_oidc' => [
'default_token_endpoint_auth_method' => 'client_secret_post',
'auto_provision' => true,
'soft_auto_provision' => true,
'disable_account_creation' => false,
],
];
@@ -0,0 +1,14 @@
; /usr/local/etc/php/conf.d/opcache-recommended.ini
; OPcache tuning
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=512
opcache.interned_strings_buffer=32
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0
opcache.save_comments=1
opcache.revalidate_freq=60
opcache.fast_shutdown=1
; Enable APCu on the CLI
apc.enable_cli=1
@@ -0,0 +1,6 @@
; /usr/local/etc/php/conf.d/nextcloud-upload.ini
upload_max_filesize=16G
post_max_size=16G
memory_limit=1024M
max_execution_time=3600
max_input_time=3600
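PHP only honors `upload_max_filesize` up to `post_max_size`, so the two limits above must stay in sync. A minimal Python sketch for sanity-checking such ini values (the parser name is ours; the `16G` shorthand is taken from the ini above):

```python
def php_shorthand_to_bytes(value: str) -> int:
    """Parse PHP ini shorthand sizes like '16G', '512M', '1024K' into bytes."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    value = value.strip()
    suffix = value[-1].upper()
    if suffix in units:
        return int(value[:-1]) * units[suffix]
    return int(value)

# post_max_size must be at least upload_max_filesize, or uploads are capped early
assert php_shorthand_to_bytes("16G") >= php_shorthand_to_bytes("16G")
print(php_shorthand_to_bytes("16G"))  # 17179869184
```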
@@ -0,0 +1,36 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Nextcloud
[Container]
Image=docker.io/library/nextcloud:{{ version['containers']['nextcloud'] }}
ContainerName=nextcloud
HostName=nextcloud
PublishPort={{ services['nextcloud']['ports']['http'] }}:80
Volume=%h/containers/nextcloud/ssl:/etc/ssl/nextcloud:ro
Volume=%h/containers/nextcloud/ini/opcache.ini:/usr/local/etc/php/conf.d/opcache-recommended.ini:ro
Volume=%h/containers/nextcloud/ini/upload.ini:/usr/local/etc/php/conf.d/upload.ini:ro
Volume=%h/data/containers/nextcloud/html:/var/www/html:rw
# General
Environment="TZ=Asia/Seoul"
# PostgreSQL
Environment="PGSSLMODE=verify-full"
Environment="PGSSLROOTCERT=/etc/ssl/nextcloud/{{ root_cert_filename }}"
## libpq in Nextcloud automatically tries to present a client certificate for mTLS, so when only server-side TLS is required, disable the option explicitly.
Environment="PGSSLCERTMODE=disable"
# Redis
Environment="REDIS_HOST=host.containers.internal"
Environment="REDIS_HOST_PORT={{ services['nextcloud']['ports']['redis'] }}"
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -0,0 +1,8 @@
[Unit]
Description=Nextcloud cron.php
Requires=nextcloud.service
After=nextcloud.service
[Service]
Type=oneshot
ExecStart=/usr/bin/podman exec -u www-data nextcloud php -f /var/www/html/cron.php
@@ -0,0 +1,10 @@
[Unit]
Description=Run Nextcloud cron every 5 minutes
[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloud-cron.service
[Install]
WantedBy=timers.target
@@ -0,0 +1,38 @@
directives:
child-src:
- '''self'''
connect-src:
- '''self'''
- 'blob:'
- 'https://raw.githubusercontent.com/opencloud-eu/awesome-apps'
- 'https://update.opencloud.eu'
- 'https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}'
# default-src:
# - '''none'''
font-src:
- '''self'''
frame-ancestors:
- '''self'''
frame-src:
- '''self'''
- 'blob:'
img-src:
- '''self'''
- 'data:'
- 'blob:'
manifest-src:
- '''self'''
media-src:
- '''self'''
# object-src:
# - '''none'''
script-src:
- '''self'''
- '''unsafe-inline'''
- '''unsafe-eval'''
style-src:
- '''self'''
- '''unsafe-inline'''
worker-src:
- '''self'''
- 'blob:'
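The directives map above is ultimately serialized into a single Content-Security-Policy header value. A minimal Python sketch of that serialization (the function name and the shortened sample directives are ours, not OpenCloud's actual implementation):

```python
def csp_header(directives: dict) -> str:
    """Join a CSP directives mapping into one header value: 'name src src; name src'."""
    return "; ".join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

header = csp_header({
    "connect-src": ["'self'", "blob:"],
    "frame-ancestors": ["'self'"],
})
print(header)  # connect-src 'self' blob:; frame-ancestors 'self'
```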
@@ -0,0 +1,17 @@
role_assignment:
driver: "oidc"
oidc_role_mapper:
role_claim: "preferred_username"
role_mapping:
{% for admin_user in ['il'] %}
- role_name: "admin"
claim_value: "{{ admin_user }}"
{% endfor %}
{% for general_user in ['morsalin', 'eunkyoung'] %}
- role_name: "user"
claim_value: "{{ general_user }}"
{% endfor %}
# - role_name: "spaceadmin"
# claim_value: ""
# - role_name: user-light
# claim_value: ""
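The two Jinja for-loops above expand into flat `role_mapping` entries, one per user per role. A Python sketch of that expansion (the helper function is ours; the usernames are the ones listed in the template):

```python
def render_role_mapping(admins, users):
    """Mimic the Jinja loops: emit one role mapping entry per user."""
    entries = []
    for admin_user in admins:
        entries.append({"role_name": "admin", "claim_value": admin_user})
    for general_user in users:
        entries.append({"role_name": "user", "claim_value": general_user})
    return entries

print(render_role_mapping(["il"], ["morsalin", "eunkyoung"]))
# [{'role_name': 'admin', 'claim_value': 'il'}, {'role_name': 'user', 'claim_value': 'morsalin'}, {'role_name': 'user', 'claim_value': 'eunkyoung'}]
```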
@@ -0,0 +1,60 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=OpenCloud
[Container]
Image=docker.io/opencloudeu/opencloud:{{ version['containers']['opencloud'] }}
ContainerName=opencloud
HostName=opencloud
PublishPort={{ services['opencloud']['ports']['http'] }}:9200
Volume=%h/containers/opencloud:/etc/opencloud:rw
Volume=%h/data/containers/opencloud:/var/lib/opencloud:rw
# General
Environment="TZ=Asia/Seoul"
# Log level info
Environment="OC_LOG_LEVEL=info"
# TLS configuration
Environment="PROXY_TLS=false"
Environment="OC_INSECURE=true"
# Connection
Environment="PROXY_HTTP_ADDR=0.0.0.0:9200"
Environment="OC_URL=https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}"
## CSP config file location; allows the Authelia public domain
Environment="PROXY_CSP_CONFIG_FILE_LOCATION=/etc/opencloud/csp.yaml"
# OIDC
Environment="OC_OIDC_ISSUER=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="PROXY_OIDC_REWRITE_WELLKNOWN=true"
## OIDC CLIENT CONFIGURATION and SCOPES
Environment="WEB_OIDC_CLIENT_ID=opencloud"
Environment="WEB_OIDC_SCOPE=openid profile email"
## Auto-provision accounts from Authelia sign-ins
Environment="PROXY_AUTOPROVISION_ACCOUNTS=true"
## Stop running the internal IdP service
Environment="OC_EXCLUDE_RUN_SERVICES=idp"
## Don't restrict special characters in usernames
Environment="GRAPH_USERNAME_MATCH=none"
# OIDC standard link environments
#Environment="WEB_OIDC_AUTHORITY=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
#Environment="WEBFINGER_OIDC_ISSUER=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
#Environment="OC_OIDC_CLIENT_ID=opencloud"
#Environment="OC_OIDC_CLIENT_SCOPES=openid profile email groups"
#Environment="WEBFINGER_ANDROID_OIDC_CLIENT_ID=opencloud"
#Environment="WEBFINGER_ANDROID_OIDC_CLIENT_SCOPES=openid profile email groups offline_access"
#Environment="WEBFINGER_DESKTOP_OIDC_CLIENT_ID=opencloud"
#Environment="WEBFINGER_DESKTOP_OIDC_CLIENT_SCOPES=openid profile email groups offline_access"
#Environment="WEBFINGER_IOS_OIDC_CLIENT_ID=opencloud"
#Environment="WEBFINGER_IOS_OIDC_CLIENT_SCOPES=openid profile email groups offline_access"
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -11,7 +11,7 @@ Wants=redis_paperless.service
Image=ghcr.io/paperless-ngx/paperless-ngx:{{ version['containers']['paperless'] }}
ContainerName=paperless
HostName=paperless
PublishPort=8001:8000/tcp
PublishPort={{ services['paperless']['ports']['http'] }}:8000/tcp
# Volumes
Volume=%h/data/containers/paperless/data:/usr/src/paperless/data:rw
@@ -21,24 +21,26 @@ Volume=%h/containers/paperless/ssl:/etc/ssl/paperless:ro
# General
Environment="TZ=Asia/Seoul"
Environment="PAPERLESS_URL=https://paperless.ilnmors.com"
Environment="PAPERLESS_TIME_ZONE=Asia/Seoul"
Environment="PAPERLESS_URL=https://{{ services['paperless']['domain']['public'] }}.{{ domain['public'] }}"
Environment="PAPERLESS_OCR_LANGUAGE=kor+eng"
Environment="PAPERLESS_OCR_LANGUAGES=kor"
Environment="PAPERLESS_OCR_MODE=force"
# Environment="PAPERLESS_OCR_MODE=force"
# Environment="PAPERLESS_TASK_WORKERS=1"
# Environment="PAPERLESS_THREADS_PER_WORKER=1"
Environment="PAPERLESS_WORKER_TIMEOUT=7200"
Secret=PAPERLESS_SECRET_KEY,type=env
# Redis
Environment="PAPERLESS_REDIS=redis://host.containers.internal:{{ hostvars['app']['redis']['paperless'] }}"
Environment="PAPERLESS_REDIS=redis://host.containers.internal:{{ services['paperless']['ports']['redis'] }}"
# Database
Environment="PAPERLESS_DBHOST={{ infra_uri['postgresql']['domain'] }}"
Environment="PAPERLESS_DBPORT={{ infra_uri['postgresql']['ports']['tcp'] }}"
Environment="PAPERLESS_DBHOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}"
Environment="PAPERLESS_DBPORT={{ services['postgresql']['ports']['tcp'] }}"
Environment="PAPERLESS_DBNAME=paperless_db"
Environment="PAPERLESS_DBUSER=paperless"
Environment="PAPERLESS_DBSSLMODE=verify-full"
Environment="PAPERLESS_DBSSLROOTCERT=/etc/ssl/paperless/ilnmors_root_ca.crt"
Environment="PAPERLESS_DBSSLROOTCERT=/etc/ssl/paperless/{{ root_cert_filename }}"
Secret=PAPERLESS_DBPASS,type=env
# OIDC
@@ -48,7 +50,7 @@ Environment="PAPERLESS_SOCIALACCOUNT_ALLOW_SIGNUPS=true"
Secret=PAPERLESS_SOCIALACCOUNT_PROVIDERS,type=env
[Service]
ExecStartPre=/usr/bin/nc -zv {{ infra_uri['postgresql']['domain'] }} {{ infra_uri['postgresql']['ports']['tcp'] }}
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
@@ -1,4 +1,4 @@
databases 16
bind 0.0.0.0
port {{ hostvars['app']['redis'][redis_service] }}
port 6379
protected-mode no
@@ -13,7 +13,7 @@ Image=docker.io/library/redis:{{ version['containers']['redis'] }}
ContainerName=redis_{{ redis_service }}
HostName=redis_{{ redis_service }}
PublishPort={{ hostvars['app']['redis'][redis_service] }}:{{ hostvars['app']['redis'][redis_service] }}
PublishPort={{ services[redis_service]['ports']['redis'] }}:6379
Volume=%h/containers/redis/{{ redis_service }}/data:/data:rw
Volume=%h/containers/redis/{{ redis_service }}/redis.conf:/usr/local/etc/redis/redis.conf:ro
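The `PublishPort` change above pins the container side to Redis's default 6379 and templates only the host side. A small Python sketch of how a Quadlet `PublishPort` value decomposes into host port, container port, and protocol (the helper name and the 6380 sample host port are ours, for illustration only):

```python
def parse_publish_port(spec: str):
    """Split a Quadlet PublishPort value 'host:container[/proto]' into parts."""
    ports, _, proto = spec.partition("/")
    host, _, container = ports.partition(":")
    return int(host), int(container), proto or "tcp"

print(parse_publish_port("6380:6379"))  # (6380, 6379, 'tcp')
```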
@@ -13,19 +13,19 @@ Image=docker.io/vaultwarden/server:{{ version['containers']['vaultwarden'] }}
ContainerName=vaultwarden
HostName=vaultwarden
PublishPort=8000:80/tcp
PublishPort={{ services['vaultwarden']['ports']['http'] }}:80/tcp
Volume=%h/data/containers/vaultwarden:/data:rw
Volume=%h/containers/vaultwarden/ssl:/etc/ssl/vaultwarden:ro
Environment="TZ=Asia/Seoul"
Environment="DOMAIN=https://vault.ilnmors.com"
Environment="DOMAIN=https://{{ services['vaultwarden']['domain']['public'] }}.{{ domain['public'] }}"
Environment="SIGNUPS_ALLOWED=false"
Secret=VW_ADMIN_TOKEN,type=env,target=ADMIN_TOKEN
Secret=VW_DATABASE_URL,type=env,target=DATABASE_URL
[Service]
ExecStartPre=/usr/bin/nc -zv {{ infra_uri['postgresql']['domain'] }} {{ infra_uri['postgresql']['ports']['tcp'] }}
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
@@ -0,0 +1,57 @@
[Quadlet]
DefaultDependencies=false
[Unit]
Description=Vikunja
After=network-online.target
Wants=network-online.target
[Container]
Image=docker.io/vikunja/vikunja:{{ version['containers']['vikunja'] }}
ContainerName=vikunja
HostName=vikunja
PublishPort={{ services['vikunja']['ports']['http'] }}:3456/tcp
# Volumes
Volume=%h/data/containers/vikunja:/app/vikunja/files:rw
Volume=%h/containers/vikunja/ssl:/etc/ssl/vikunja:ro
# General
Environment="TZ=Asia/Seoul"
Environment="VIKUNJA_DEFAULTSETTINGS_TIMEZONE=Asia/Seoul"
Environment="VIKUNJA_SERVICE_TIMEZONE=Asia/Seoul"
Environment="VIKUNJA_SERVICE_PUBLICURL=https://{{ services['vikunja']['domain']['public'] }}.{{ domain['public'] }}"
Environment="VIKUNJA_SERVICE_ENABLEREGISTRATION=false"
Secret=VIKUNJA_SERVICE_JWTSECRET,type=env
# Database
Environment="VIKUNJA_DATABASE_TYPE=postgres"
Environment="VIKUNJA_DATABASE_HOST={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}"
Environment="VIKUNJA_DATABASE_USER=vikunja"
Environment="VIKUNJA_DATABASE_DATABASE=vikunja_db"
Environment="VIKUNJA_DATABASE_SSLMODE=verify-full"
Environment="VIKUNJA_DATABASE_SSLROOTCERT=/etc/ssl/vikunja/{{ root_cert_filename }}"
Secret=VIKUNJA_DATABASE_PASSWORD,type=env
# OIDC
Environment="VIKUNJA_AUTH_OPENID_ENABLED=true"
Environment="VIKUNJA_AUTH_OPENID_PROVIDERS_authelia_NAME=Authelia"
Environment="VIKUNJA_AUTH_OPENID_PROVIDERS_authelia_AUTHURL=https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}"
Environment="VIKUNJA_AUTH_OPENID_PROVIDERS_authelia_CLIENTID=vikunja"
# Environment="VIKUNJA_AUTH_OPENID_PROVIDERS_authelia_SCOPE=" (default value: openid email profile)
# Vikunja doesn't support OIDC and local login simultaneously.
# Environment="VIKUNJA_AUTH_OPENID_PROVIDERS_authelia_USERNAMEFALLBACK=true"
# Environment="VIKUNJA_AUTH_OPENID_PROVIDERS_authelia_EMAILFALLBACK=true"
Secret=VIKUNJA_AUTH_OPENID_PROVIDERS_authelia_CLIENTSECRET,type=env
[Service]
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
@@ -15,7 +15,7 @@ ContainerName=authelia
HostName=authelia
# Web UI
PublishPort=9091:9091/tcp
PublishPort={{ services['authelia']['ports']['http'] }}:9091/tcp
Volume=%h/containers/authelia/config:/config:rw
@@ -56,8 +56,9 @@ Exec=--config /config/authelia.yaml
# Wait for dependencies
# They run as rootless podman containers, so their ports are not open until they are running normally
# Check their ports with the nc command
ExecStartPre=/usr/bin/nc -zv {{ infra_uri['postgresql']['domain'] }} {{ infra_uri['postgresql']['ports']['tcp'] }}
ExecStartPre=/usr/bin/nc -zv {{ infra_uri['ldap']['domain'] }} {{ infra_uri['ldap']['ports']['ldaps'] }}
ExecStartPre=/usr/bin/nc -zv {{ services['postgresql']['domain'] }}.{{ domain['internal'] }} {{ services['postgresql']['ports']['tcp'] }}
# services['ldap']['ports']['ldaps'] is 6360, but nftables operates on the original port 636
ExecStartPre=/usr/bin/nc -zv {{ services['ldap']['domain'] }}.{{ domain['internal'] }} 636
ExecStartPre=sleep 5
Restart=always
RestartSec=10s
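The `ExecStartPre` lines above are plain TCP reachability probes: `nc -zv host port` succeeds only once something is listening. The same check in Python (the function name and the loopback usage are ours, sketching what `nc -z` does):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection succeeds, like `nc -z host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```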
@@ -10,7 +10,7 @@ theme: 'auto'
# Server configuration
server:
# TLS will be applied by Caddy
address: 'tcp://:9091/'
address: 'tcp://:{{ services['authelia']['ports']['http'] }}/'
# Log configuration
log:
@@ -20,7 +20,7 @@ log:
# TOTP configuration
totp:
# The issuer option is for the 2FA app. It works as an identifier, e.g. 'My homelab', 'ilnmors.internal', or 'Authelia - ilnmors'
issuer: 'ilnmors.internal'
issuer: '{{ domain['internal'] }}'
# Identity validation configuration
identity_validation:
@@ -31,21 +31,21 @@ identity_validation:
authentication_backend:
ldap:
# ldaps uses 636 -> NAT automatically changes port 636 in outgoing packets -> 2636, which the lldap server uses.
address: 'ldaps://ldap.ilnmors.internal'
address: 'ldaps://{{ services['ldap']['domain'] }}.{{ domain['internal'] }}'
implementation: 'lldap'
# tls configruation, it uses certificates_directory's /etc/ssl/authelia/ilnmors_root_ca.crt
# tls configuration, it uses certificates_directory's /etc/ssl/authelia/{{ root_cert_filename }}
tls:
server_name: 'ldap.ilnmors.internal'
server_name: '{{ services['ldap']['domain'] }}.{{ domain['internal'] }}'
skip_verify: false
# LLDAP base DN
base_dn: 'dc=ilnmors,dc=internal'
base_dn: '{{ domain['dc'] }}'
additional_users_dn: 'ou=people'
additional_groups_dn: 'ou=groups'
# LLDAP filters
users_filter: '(&(|({username_attribute}={input})({mail_attribute}={input}))(objectClass=person))'
groups_filter: '(&(member={dn})(objectClass=groupOfNames))'
# LLDAP bind account configuration
user: 'uid=authelia,ou=people,dc=ilnmors,dc=internal'
user: 'uid=authelia,ou=people,{{ domain['dc'] }}'
password: '' # $AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE option is designated in container file
# Access control configuration
@@ -53,14 +53,12 @@ access_control:
default_policy: 'deny'
rules:
# authelia portal
- domain: 'authelia.ilnmors.internal'
- domain: '{{ services['authelia']['domain'] }}.{{ domain['public'] }}'
policy: 'bypass'
- domain: 'authelia.ilnmors.com'
policy: 'bypass'
- domain: 'test.ilnmors.com'
policy: 'one_factor'
subject:
- 'group:admins'
# - domain: 'test.ilnmors.com'
# policy: 'one_factor'
# subject:
# - 'group:admins'
# Session provider configuration
session:
secret: '' # $AUTHELIA_SESSION_SECRET_FILE is designated in container file
@@ -68,8 +66,8 @@ session:
inactivity: '24 hours' # Session maintains for 24 hours without actions
cookies:
- name: 'authelia_public_session'
domain: 'ilnmors.com'
authelia_url: 'https://authelia.ilnmors.com'
domain: '{{ domain['public'] }}'
authelia_url: 'https://{{ services['authelia']['domain'] }}.{{ domain['public'] }}'
same_site: 'lax'
# This authelia doesn't use Redis.
@@ -78,12 +76,12 @@ session:
storage:
encryption_key: '' # $AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE is designated in container file
postgres:
address: 'tcp://{{ infra_uri['postgresql']['domain'] }}:{{ infra_uri['postgresql']['ports']['tcp'] }}'
address: 'tcp://{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}'
database: 'authelia_db'
username: 'authelia'
password: '' # $AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE is designated in container file
tls:
server_name: '{{ infra_uri['postgresql']['domain'] }}'
server_name: '{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}'
skip_verify: false
# Notification provider
@@ -95,13 +93,24 @@ notifier:
identity_providers:
oidc:
hmac_secret: '' # $AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
# For apps which don't use a client secret.
cors:
endpoints:
- 'authorization'
- 'token'
- 'revocation'
- 'introspection'
- 'userinfo'
allowed_origins:
- 'https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}'
allowed_origins_from_client_redirect_uris: true
jwks:{% raw %}
- algorithm: 'RS256'
use: 'sig'
key: {{ secret "/run/secrets/AUTHELIA_JWKS_RS256" | mindent 10 "|" | msquote }}
- algorithm: 'ES256'
use: 'sig'
key: {{ secret "/run/secrets/AUTHELIA_JWKS_ES256" | mindent 10 "|" | msquote }}{% endraw %}
clients:
# https://www.authelia.com/integration/openid-connect/clients/synology-dsm/
- client_id: 'dsm'
@@ -117,7 +126,7 @@ identity_providers:
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://{{ infra_uri['nas']['domain'] }}:{{ infra_uri['nas']['ports']['https'] }}'
- 'https://{{ services['nas']['domain'] }}.{{ domain['internal'] }}:{{ services['nas']['ports']['https'] }}'
scopes:
- 'openid'
- 'profile'
@@ -140,7 +149,7 @@ identity_providers:
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://gitea.ilnmors.com/user/oauth2/authelia/callback'
- 'https://{{ services['gitea']['domain']['public'] }}.{{ domain['public'] }}/user/oauth2/authelia/callback'
scopes:
- 'openid'
- 'email'
@@ -161,8 +170,8 @@ identity_providers:
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://immich.ilnmors.com/auth/login'
- 'https://immich.ilnmors.com/user-settings'
- 'https://{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}/auth/login'
- 'https://{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}/user-settings'
- 'app.immich:///oauth-callback'
scopes:
- 'openid'
@@ -184,7 +193,7 @@ identity_providers:
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://budget.ilnmors.com/openid/callback'
- 'https://{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}/openid/callback'
scopes:
- 'openid'
- 'profile'
@@ -206,7 +215,166 @@ identity_providers:
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://paperless.ilnmors.com/accounts/oidc/authelia/login/callback/'
- 'https://{{ services['paperless']['domain']['public'] }}.{{ domain['public'] }}/accounts/oidc/authelia/login/callback/'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/vikunja/
- client_id: 'vikunja'
client_name: 'Vikunja'
client_secret: '{{ hostvars['console']['vikunja']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://{{ services['vikunja']['domain']['public'] }}.{{ domain['public'] }}/auth/openid/authelia'
scopes:
- 'openid'
- 'profile'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_basic'
# OpenCloud configuration
## https://docs.opencloud.eu/docs/admin/configuration/authentication-and-user-management/external-idp/
## Web
- client_id: 'opencloud'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}/'
- 'https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}/oidc-callback.html'
- 'https://{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}/oidc-silent-redirect.html'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## desktop
- client_id: 'OpenCloudDesktop'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'http://localhost'
- 'http://127.0.0.1'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## Android
- client_id: 'OpenCloudAndroid'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'oc://android.opencloud.eu'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
## iOS
- client_id: 'OpenCloudIOS'
client_name: 'OpenCloud'
public: true
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'oc://ios.opencloud.eu'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
- 'offline_access'
response_types:
- 'code'
grant_types:
- 'authorization_code'
- 'refresh_token'
access_token_signed_response_alg: 'RS256'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'none'
# https://docs.affine.pro/self-host-affine/administer/oauth-2-0
- client_id: 'affine'
client_name: 'Affine'
client_secret: '{{ hostvars['console']['affine']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: false
pkce_challenge_method: ''
redirect_uris:
- 'https://{{ services['affine']['domain']['public'] }}.{{ domain['public'] }}/oauth/callback'
scopes:
- 'openid'
- 'profile'
- 'email'
response_types:
- 'code'
grant_types:
- 'authorization_code'
access_token_signed_response_alg: 'none'
userinfo_signed_response_alg: 'none'
token_endpoint_auth_method: 'client_secret_post'
# https://www.authelia.com/integration/openid-connect/clients/nextcloud/#openid-connect-user-backend-app
- client_id: 'nextcloud'
client_name: 'Nextcloud'
client_secret: '{{ hostvars['console']['nextcloud']['oidc']['hash'] }}'
public: false
authorization_policy: 'one_factor'
require_pkce: true
pkce_challenge_method: 'S256'
redirect_uris:
- 'https://{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}/apps/user_oidc/code'
scopes:
- 'openid'
- 'profile'
@@ -12,6 +12,6 @@ RUN xcaddy build \
FROM docker.io/library/caddy:{{ version['containers']['caddy'] }}
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY ./ilnmors_root_ca.crt /usr/local/share/ca-certificates/ilnmors_root_ca.crt
COPY ./{{ root_cert_filename }} /usr/local/share/ca-certificates/{{ root_cert_filename }}
RUN update-ca-certificates
@@ -14,18 +14,18 @@ Wants=network-online.target
[Container]
Image=ilnmors.internal/{{ node['name'] }}/caddy:{{ version['containers']['caddy'] }}
Image={{ domain['internal'] }}/{{ node['name'] }}/caddy:{{ version['containers']['caddy'] }}
ContainerName=caddy_{{ node['name'] }}
HostName=caddy_{{ node['name'] }}
{% if node['name'] == 'infra' %}
AddHost={{ infra_uri['ca']['domain'] }}:host-gateway
AddHost={{ infra_uri['prometheus']['domain'] }}:host-gateway
AddHost={{ infra_uri['loki']['domain'] }}:host-gateway
AddHost={{ services['ca']['domain'] }}.{{ domain['internal'] }}:host-gateway
AddHost={{ services['prometheus']['domain'] }}.{{ domain['internal'] }}:host-gateway
AddHost={{ services['loki']['domain'] }}.{{ domain['internal'] }}:host-gateway
{% endif %}
PublishPort=2080:80/tcp
PublishPort=2443:443/tcp
PublishPort={{ services['caddy']['ports']['http'] }}:80/tcp
PublishPort={{ services['caddy']['ports']['https'] }}:443/tcp
Volume=%h/containers/caddy/etc:/etc/caddy:ro
Volume=%h/containers/caddy/data:/data:rw
@@ -8,19 +8,19 @@
(private_tls) {
tls {
issuer acme {
dir https://{{ infra_uri['ca']['domain'] }}:{{ infra_uri['ca']['ports']['https'] }}/acme/acme@ilnmors.internal/directory
dir https://{{ services['ca']['domain'] }}.{{ domain['internal'] }}:{{ services['ca']['ports']['https'] }}/acme/acme@{{ domain['internal'] }}/directory
dns rfc2136 {
server {{ infra_uri['bind']['domain'] }}:{{ infra_uri['bind']['ports']['dns'] }}
server {{ services['bind']['domain'] }}.{{ domain['internal'] }}:{{ services['bind']['ports']['dns'] }}
key_name acme-key
key_alg hmac-sha256
key "{file./run/secrets/CADDY_ACME_KEY}"
}
resolvers {{ infra_uri['bind']['domain'] }}
resolvers {{ services['bind']['domain'] }}.{{ domain['internal'] }}
}
}
}
app.ilnmors.internal {
{{ node['name'] }}.{{ domain['internal'] }} {
import private_tls
metrics
}
@@ -29,33 +29,57 @@ app.ilnmors.internal {
# root * /usr/share/caddy
# file_server
# }
vault.app.ilnmors.internal {
{{ services['vaultwarden']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:8000 {
reverse_proxy host.containers.internal:{{ services['vaultwarden']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
gitea.app.ilnmors.internal {
{{ services['gitea']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:3000 {
reverse_proxy host.containers.internal:{{ services['gitea']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
immich.app.ilnmors.internal {
{{ services['immich']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:2283 {
reverse_proxy host.containers.internal:{{ services['immich']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
budget.app.ilnmors.internal {
{{ services['actualbudget']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:5006 {
reverse_proxy host.containers.internal:{{ services['actualbudget']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
paperless.app.ilnmors.internal {
{{ services['paperless']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:8001 {
reverse_proxy host.containers.internal:{{ services['paperless']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['vikunja']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['vikunja']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['opencloud']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['opencloud']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['affine']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['affine']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
{{ services['nextcloud']['domain']['internal'] }}.{{ domain['internal'] }} {
import private_tls
reverse_proxy host.containers.internal:{{ services['nextcloud']['ports']['http'] }} {
header_up Host {http.request.header.X-Forwarded-Host}
}
}
@@ -1,7 +1,7 @@
{
# CrowdSec LAPI connection
crowdsec {
api_url https://{{ infra_uri['crowdsec']['domain'] }}:{{ infra_uri['crowdsec']['ports']['https'] }}
api_url https://{{ services['crowdsec']['domain'] }}.{{ domain['internal'] }}:{{ services['crowdsec']['ports']['https'] }}
api_key "{file./run/secrets/CADDY_CROWDSEC_KEY}"
}
}
@@ -15,31 +15,31 @@
roll_size 100MiB
roll_keep 1
}
format json
format json
}
}
# Private TLS ACME with DNS-01-challenge
(private_tls) {
tls {
issuer acme {
dir https://{{ infra_uri['ca']['domain'] }}:{{ infra_uri['ca']['ports']['https'] }}/acme/acme@ilnmors.internal/directory
dir https://{{ services['ca']['domain'] }}.{{ domain['internal'] }}:{{ services['ca']['ports']['https'] }}/acme/acme@{{ domain['internal'] }}/directory
dns rfc2136 {
server {{ infra_uri['bind']['domain'] }}:{{ infra_uri['bind']['ports']['dns'] }}
server {{ services['bind']['domain'] }}.{{ domain['internal'] }}:{{ services['bind']['ports']['dns'] }}
key_name acme-key
key_alg hmac-sha256
key "{file./run/secrets/CADDY_ACME_KEY}"
}
resolvers {{ infra_uri['bind']['domain'] }}
resolvers {{ services['bind']['domain'] }}.{{ domain['internal'] }}
}
}
}
# Public domain
authelia.ilnmors.com {
{{ services['authelia']['domain'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy host.containers.internal:9091
reverse_proxy host.containers.internal:{{ services['authelia']['ports']['http'] }}
}
}
# test.ilnmors.com {
@@ -64,54 +64,90 @@ authelia.ilnmors.com {
# }
# }
# }
vault.ilnmors.com {
{{ services['vaultwarden']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://vault.app.ilnmors.internal {
reverse_proxy https://{{ services['vaultwarden']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
gitea.ilnmors.com {
{{ services['gitea']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://gitea.app.ilnmors.internal {
reverse_proxy https://{{ services['gitea']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
immich.ilnmors.com {
{{ services['immich']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://immich.app.ilnmors.internal {
reverse_proxy https://{{ services['immich']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
budget.ilnmors.com {
{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://budget.app.ilnmors.internal {
reverse_proxy https://{{ services['actualbudget']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
paperless.ilnmors.com {
{{ services['paperless']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://paperless.app.ilnmors.internal {
reverse_proxy https://{{ services['paperless']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['vikunja']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['vikunja']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['opencloud']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['affine']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['affine']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }} {
import crowdsec_log
route {
crowdsec
reverse_proxy https://{{ services['nextcloud']['domain']['internal'] }}.{{ domain['internal'] }} {
header_up Host {http.reverse_proxy.upstream.host}
}
}
}
# Internal domain
auth.ilnmors.internal {
{{ node['name'] }}.{{ domain['internal'] }} {
import private_tls
metrics
}
@@ -2,40 +2,40 @@
(private_tls) {
tls {
issuer acme {
dir https://{{ infra_uri['ca']['domain'] }}:{{ infra_uri['ca']['ports']['https'] }}/acme/acme@ilnmors.internal/directory
dir https://{{ services['ca']['domain'] }}.{{ domain['internal'] }}:{{ services['ca']['ports']['https'] }}/acme/acme@{{ domain['internal'] }}/directory
dns rfc2136 {
server {{ infra_uri['bind']['domain'] }}:{{ infra_uri['bind']['ports']['dns'] }}
server {{ services['bind']['domain'] }}.{{ domain['internal'] }}:{{ services['bind']['ports']['dns'] }}
key_name acme-key
key_alg hmac-sha256
key "{file./run/secrets/CADDY_ACME_KEY}"
}
resolvers {{ infra_uri['bind']['domain'] }}
resolvers {{ services['bind']['domain'] }}.{{ domain['internal'] }}
}
}
}
infra.ilnmors.internal {
{{ node['name'] }}.{{ domain['internal'] }} {
import private_tls
metrics
}
{{ infra_uri['ldap']['domain'] }} {
{{ services['ldap']['domain'] }}.{{ domain['internal'] }} {
import private_tls
route {
reverse_proxy host.containers.internal:{{ infra_uri['ldap']['ports']['http'] }}
reverse_proxy host.containers.internal:{{ services['ldap']['ports']['http'] }}
}
}
{{ infra_uri['prometheus']['domain'] }} {
{{ services['prometheus']['domain'] }}.{{ domain['internal'] }} {
import private_tls
route {
reverse_proxy https://{{ infra_uri['prometheus']['domain'] }}:{{ infra_uri['prometheus']['ports']['https'] }}
reverse_proxy https://{{ services['prometheus']['domain'] }}.{{ domain['internal'] }}:{{ services['prometheus']['ports']['https'] }}
}
}
grafana.ilnmors.internal {
{{ services['grafana']['domain'] }}.{{ domain['internal'] }} {
import private_tls
route {
reverse_proxy host.containers.internal:3000
reverse_proxy host.containers.internal:{{ services['grafana']['ports']['http'] }}
}
}
@@ -13,7 +13,7 @@ Image=docker.io/smallstep/step-ca:{{ version['containers']['step'] }}
ContainerName=ca
HostName=ca
PublishPort=9000:9000/tcp
PublishPort={{ services['ca']['ports']['https'] }}:9000/tcp
Volume=%h/containers/ca/certs:/home/step/certs:ro
Volume=%h/containers/ca/secrets:/home/step/secrets:ro
@@ -22,14 +22,17 @@ Volume=%h/containers/ca/db:/home/step/db:rw
Volume=%h/containers/ca/templates:/home/step/templates:rw
Environment="TZ=Asia/Seoul"
Environment="PWDPATH=/run/secrets/STEP_CA_PASSWORD"
# Since 0.30.0, Docker CMD no longer expands PWDPATH.
#Environment="PWDPATH=/run/secrets/STEP_CA_PASSWORD"
Secret=STEP_CA_PASSWORD,target=/run/secrets/STEP_CA_PASSWORD
Exec=/usr/local/bin/step-ca --password-file /run/secrets/STEP_CA_PASSWORD /home/step/config/ca.json
[Service]
Restart=always
RestartSec=10s
TimeoutStopSec=120
[Install]
WantedBy=default.target
WantedBy=default.target
@@ -1,12 +1,12 @@
{
"root": "/home/step/certs/ilnmors_root_ca.crt",
"root": "/home/step/certs/{{ root_cert_filename }}",
"federatedRoots": null,
"crt": "/home/step/certs/ilnmors_intermediate_ca.crt",
"key": "/home/step/secrets/ilnmors_intermediate_ca.key",
"crt": "/home/step/certs/{{ intermediate_cert_filename }}",
"key": "/home/step/secrets/{{ intermediate_key_filename }}",
"address": ":9000",
"insecureAddress": "",
"dnsNames": [
"{{ infra_uri['ca']['domain'] }}"
"{{ services['ca']['domain'] }}.{{ domain['internal'] }}"
],
"logger": {
"format": "text"
@@ -21,9 +21,9 @@
"x509": {
"allow": {
"dns": [
"ilnmors.internal",
"*.ilnmors.internal",
"*.app.ilnmors.internal"
"{{ domain['internal'] }}",
"*.{{ domain['internal'] }}",
"*.app.{{ domain['internal'] }}"
]
},
"allowWildcardNames": true
@@ -32,7 +32,7 @@
"provisioners": [
{
"type": "ACME",
"name": "acme@ilnmors.internal",
"name": "acme@{{ domain['internal'] }}",
"claims": {
"defaultTLSCertDuration": "2160h0m0s",
"enableSSHCA": true,
@@ -58,5 +58,5 @@
"maxVersion": 1.3,
"renegotiation": false
},
"commonName": "ilnmors Online CA"
"commonName": "{{ domain['internal'] }} Online CA"
}
@@ -1,6 +1,6 @@
{
"ca-url": "https://{{ infra_uri['ca']['domain'] }}:{{ infra_uri['ca']['ports']['https'] }}",
"ca-url": "https://{{ services['ca']['domain'] }}.{{ domain['internal'] }}:{{ services['ca']['ports']['https'] }}",
"ca-config": "/home/step/config/ca.json",
"fingerprint": "215c851d2d0d2dbf90fc3507425207c29696ffd587c640c94a68dddb1d84d8e8",
"root": "/home/step/certs/ilnmors_root_ca.crt"
"root": "/home/step/certs/{{ root_cert_filename }}"
}
@@ -7,19 +7,19 @@ provisioning = /etc/grafana/provisioning
[server]
protocol = http
http_port = 3000
domain = grafana.ilnmors.internal
root_url = http://grafana.ilnmors.internal/
http_port = {{ services['grafana']['ports']['http'] }}
domain = {{ services['grafana']['domain'] }}.{{ domain['internal'] }}
root_url = http://{{ services['grafana']['domain'] }}.{{ domain['internal'] }}/
router_logging = false
[database]
type = postgres
host = {{ infra_uri['postgresql']['domain'] }}:{{ infra_uri['postgresql']['ports']['tcp'] }}
host = {{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}
name = grafana_db
user = grafana
password = $__file{/run/secrets/GF_DB_PASSWORD}
ssl_mode = verify-full
ca_cert_path = /etc/ssl/grafana/ilnmors_root_ca.crt
ca_cert_path = /etc/ssl/grafana/{{ root_cert_filename }}
[auth.ldap]
enabled = true
@@ -1,7 +1,7 @@
# https://github.com/lldap/lldap/blob/main/example_configs/grafana_ldap_config.toml
[[servers]]
host = "{{ infra_uri['ldap']['domain'] }}"
port = {{ infra_uri['ldap']['ports']['ldaps'] }}
host = "{{ services['ldap']['domain'] }}.{{ domain['internal'] }}"
port = {{ services['ldap']['ports']['ldaps'] }}
# Activate STARTTLS or LDAPS
use_ssl = true
# true = STARTTLS, false = LDAPS
@@ -9,16 +9,16 @@ start_tls = false
tls_ciphers = []
min_tls_version = ""
ssl_skip_verify = false
root_ca_cert = "/etc/ssl/grafana/ilnmors_root_ca.crt"
root_ca_cert = "/etc/ssl/grafana/{{ root_cert_filename }}"
# mTLS option, it is not needed
# client_cert = "/path/to/client.crt"
# client_key = "/path/to/client.key"
bind_dn = "uid=grafana,ou=people,dc=ilnmors,dc=internal"
bind_dn = "uid=grafana,ou=people,{{ domain['dc'] }}"
bind_password = "$__file{/run/secrets/LDAP_BIND_PASSWORD}"
search_filter = "(|(uid=%s)(mail=%s))"
search_base_dns = ["dc=ilnmors,dc=internal"]
search_base_dns = ["{{ domain['dc'] }}"]
[servers.attributes]
member_of = "memberOf"
@@ -28,20 +28,20 @@ surname = "sn"
username = "uid"
group_search_filter = "(&(objectClass=groupOfUniqueNames)(uniqueMember=%s))"
group_search_base_dns = ["ou=groups,dc=ilnmors,dc=internal"]
group_search_base_dns = ["ou=groups,{{ domain['dc'] }}"]
group_search_filter_user_attribute = "uid"
[[servers.group_mappings]]
group_dn = "cn=lldap_admin,ou=groups,dc=ilnmors,dc=internal"
group_dn = "cn=lldap_admin,ou=groups,{{ domain['dc'] }}"
org_role = "Admin"
grafana_admin = true
[[servers.group_mappings]]
group_dn = "cn=admins,ou=groups,dc=ilnmors,dc=internal"
group_dn = "cn=admins,ou=groups,{{ domain['dc'] }}"
org_role = "Editor"
grafana_admin = false
[[servers.group_mappings]]
group_dn = "cn=users,ou=groups,dc=ilnmors,dc=internal"
group_dn = "cn=users,ou=groups,{{ domain['dc'] }}"
org_role = "Viewer"
grafana_admin = false
@@ -4,7 +4,7 @@ apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: https://prometheus.ilnmors.internal:9090
url: https://{{ services['prometheus']['domain'] }}.{{ domain['internal'] }}:{{ services['prometheus']['ports']['https'] }}
access: proxy
isDefault: true
jsonData:
@@ -12,11 +12,11 @@ datasources:
tlsAuthWithCACert: true
httpMethod: POST
secureJsonData:
tlsCACert: "$__file{/etc/ssl/grafana/ilnmors_root_ca.crt}"
tlsCACert: "$__file{/etc/ssl/grafana/{{ root_cert_filename }}}"
- name: Loki
type: loki
url: https://loki.ilnmors.internal:3100
url: https://{{ services['loki']['domain'] }}.{{ domain['internal'] }}:{{ services['loki']['ports']['https'] }}
access: proxy
jsonData:
tlsAuth: false
@@ -25,5 +25,5 @@ datasources:
httpHeaderName1: "X-Scope-OrgID"
maxLines: 1000
secureJsonData:
tlsCACert: "$__file{/etc/ssl/grafana/ilnmors_root_ca.crt}"
httpHeaderValue1: "ilnmors.internal"
tlsCACert: "$__file{/etc/ssl/grafana/{{ root_cert_filename }}}"
httpHeaderValue1: "{{ domain['internal'] }}"
@@ -13,12 +13,12 @@ Image=docker.io/grafana/grafana:{{ version['containers']['grafana'] }}
ContainerName=grafana
HostName=grafana
AddHost={{ infra_uri['postgresql']['domain'] }}:host-gateway
AddHost={{ infra_uri['ldap']['domain'] }}:host-gateway
AddHost={{ infra_uri['prometheus']['domain'] }}:host-gateway
AddHost={{ infra_uri['loki']['domain'] }}:host-gateway
AddHost={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:host-gateway
AddHost={{ services['ldap']['domain'] }}.{{ domain['internal'] }}:host-gateway
AddHost={{ services['prometheus']['domain'] }}.{{ domain['internal'] }}:host-gateway
AddHost={{ services['loki']['domain'] }}.{{ domain['internal'] }}:host-gateway
PublishPort=3000:3000/tcp
PublishPort={{ services['grafana']['ports']['http'] }}:3000/tcp
Volume=%h/containers/grafana/data:/var/lib/grafana:rw
Volume=%h/containers/grafana/etc:/etc/grafana:ro
@@ -13,11 +13,11 @@ Image=docker.io/lldap/lldap:{{ version['containers']['ldap'] }}
ContainerName=ldap
HostName=ldap
# They are at the same host (for Pasta, it is needed)
AddHost={{ infra_uri['postgresql']['domain'] }}:host-gateway
AddHost={{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:host-gateway
# For LDAPS - 636 > 6360 nftables
PublishPort=6360:6360/tcp
PublishPort={{ services['ldap']['ports']['ldaps'] }}:6360/tcp
# Web UI
PublishPort=17170:17170/tcp
PublishPort={{ services['ldap']['ports']['http'] }}:17170/tcp
Volume=%h/containers/ldap/data:/data:rw
@@ -27,7 +27,7 @@ Volume=%h/containers/ldap/ssl:/etc/ssl/ldap:ro
Environment="TZ=Asia/Seoul"
# Domain
Environment="LLDAP_LDAP_BASE_DN=dc=ilnmors,dc=internal"
Environment="LLDAP_LDAP_BASE_DN={{ domain['dc'] }}"
# LDAPS
Environment="LLDAP_LDAPS_OPTIONS__ENABLED=true"
@@ -1,7 +1,7 @@
---
server:
http_listen_address: "::"
http_listen_port: 3100
http_listen_port: {{ services['loki']['ports']['https'] }}
http_tls_config:
cert_file: /etc/ssl/loki/loki.crt
key_file: /etc/ssl/loki/loki.key
@@ -13,7 +13,7 @@ Image=docker.io/grafana/loki:{{ version['containers']['loki'] }}
ContainerName=loki
HostName=loki
PublishPort=3100:3100/tcp
PublishPort={{ services['loki']['ports']['https'] }}:3100/tcp
Volume=%h/containers/loki/data:/loki:rw
Volume=%h/containers/loki/etc:/etc/loki:ro
@@ -8,11 +8,11 @@ listen_addresses = '*'
# Max connections
max_connections = 250
# listen_port
port = 5432
port = {{ services['postgresql']['ports']['tcp'] }}
# SSL
ssl = on
ssl_ca_file = '/etc/ssl/postgresql/ilnmors_root_ca.crt'
ssl_ca_file = '/etc/ssl/postgresql/{{ root_cert_filename }}'
ssl_cert_file = '/etc/ssl/postgresql/postgresql.crt'
ssl_key_file = '/etc/ssl/postgresql/postgresql.key'
ssl_ciphers = 'HIGH:!aNULL:!MD5'
@@ -8,12 +8,12 @@ After=network-online.target
Wants=network-online.target
[Container]
Image=ilnmors.internal/{{ node['name'] }}/postgres:pg{{ version['containers']['postgresql'] }}-vectorchord{{ version['containers']['vectorchord'] }}
Image={{ domain['internal'] }}/{{ node['name'] }}/postgres:pg{{ version['containers']['postgresql'] }}-vectorchord{{ version['containers']['vectorchord'] }}
ContainerName=postgresql
HostName=postgresql
PublishPort=5432:5432/tcp
PublishPort={{ services['postgresql']['ports']['tcp'] }}:5432/tcp
Volume=%h/containers/postgresql/data:/var/lib/postgresql:rw
Volume=%h/containers/postgresql/config:/config:ro
@@ -23,8 +23,8 @@ scrape_configs:
# metrics_path defaults to '/metrics'
scheme: "https"
tls_config:
ca_file: "/etc/ssl/prometheus/ilnmors_root_ca.crt"
server_name: "{{ infra_uri['prometheus']['domain'] }}"
ca_file: "/etc/ssl/prometheus/{{ root_cert_filename }}"
server_name: "{{ services['prometheus']['domain'] }}.{{ domain['internal'] }}"
static_configs:
- targets: ["localhost:9090"]
# The label name is added as a label `label_name=<label_value>` to any timeseries scraped from this config.
@@ -13,7 +13,7 @@ Image=docker.io/prom/prometheus:{{ version['containers']['prometheus'] }}
ContainerName=prometheus
HostName=prometheus
PublishPort=9090:9090/tcp
PublishPort={{ services['prometheus']['ports']['https'] }}:9090/tcp
Volume=%h/containers/prometheus/data:/prometheus:rw
Volume=%h/containers/prometheus/etc:/etc/prometheus:ro
@@ -13,7 +13,7 @@ HostName=X509-exporter
Volume=%h/containers/x509-exporter/certs:/certs:ro
PublishPort=9793:9793
PublishPort={{ services['x509-exporter']['ports']['http'] }}:9793
Exec=--listen-address :9793 --watch-dir=/certs
@@ -6,7 +6,7 @@
//// Metric ouput
prometheus.remote_write "prometheus" {
endpoint {
url = "https://{{ infra_uri['prometheus']['domain'] }}:{{ infra_uri['prometheus']['ports']['https'] }}/api/v1/write"
url = "https://{{ services['prometheus']['domain'] }}.{{ domain['internal'] }}:{{ services['prometheus']['ports']['https'] }}/api/v1/write"
}
}
@@ -71,8 +71,8 @@ prometheus.scrape "system" {
////// For Crowdsec metrics
prometheus.scrape "crowdsec" {
targets = [
{ "__address__" = "{{ infra_uri['crowdsec']['domain'] }}:6060", "job" = "crowdsec" },
{ "__address__" = "{{ infra_uri['crowdsec']['domain'] }}:60601", "job" = "crowdsec-bouncer" },
{ "__address__" = "{{ services['crowdsec']['domain'] }}.{{ domain['internal'] }}:6060", "job" = "crowdsec" },
{ "__address__" = "{{ services['crowdsec']['domain'] }}.{{ domain['internal'] }}:60601", "job" = "crowdsec-bouncer" },
]
honor_labels = true
forward_to = [prometheus.relabel.default_label.receiver]
@@ -83,7 +83,7 @@ prometheus.scrape "crowdsec" {
////// For postgresql metrics
prometheus.exporter.postgres "postgresql" {
data_source_names = [
"postgres://alloy@{{ infra_uri['postgresql']['domain'] }}:{{ infra_uri['postgresql']['ports']['tcp'] }}/postgres?sslmode=verify-full",
"postgres://alloy@{{ services['postgresql']['domain'] }}.{{ domain['internal'] }}:{{ services['postgresql']['ports']['tcp'] }}/postgres?sslmode=verify-full",
]
}
prometheus.scrape "postgresql" {
@@ -93,7 +93,7 @@ prometheus.scrape "postgresql" {
///// For certificates metrics
prometheus.scrape "x509" {
targets = [
{ "__address__" = "{{ node['name'] }}.ilnmors.internal:9793" },
{ "__address__" = "{{ node['name'] }}.{{ domain['internal'] }}:{{ services['x509-exporter']['ports']['http'] }}" },
]
forward_to = [prometheus.relabel.default_label.receiver]
}
@@ -103,7 +103,7 @@ prometheus.scrape "x509" {
////// For Input Caddy metrics
prometheus.scrape "caddy" {
targets = [
{ "__address__" = "{{ node['name'] }}.ilnmors.internal:443" },
{ "__address__" = "{{ node['name'] }}.{{ domain['internal'] }}:443" },
]
scheme = "https"
forward_to = [prometheus.relabel.default_label.receiver]
@@ -114,8 +114,8 @@ prometheus.scrape "caddy" {
//// Logs output
loki.write "loki" {
endpoint {
url = "https://{{ infra_uri['loki']['domain'] }}:{{ infra_uri['loki']['ports']['https'] }}/loki/api/v1/push"
tenant_id = "ilnmors.internal"
url = "https://{{ services['loki']['domain'] }}.{{ domain['internal'] }}:{{ services['loki']['ports']['https'] }}/loki/api/v1/push"
tenant_id = "{{ domain['internal'] }}"
}
}
//// Logs relabel
@@ -203,12 +203,11 @@ loki.relabel "caddy_relabel" {
loki.process "journal_parser" {
forward_to = [loki.write.loki.receiver]
// Severity parsing
// If content of log includes "level" information, change the level
stage.logfmt {
mapping = {
"content_level" = "level",
}
stage.regex {
// Regex to extract the log level from the content.
expression = "(?i)(?:level[\"\\s:=]+|\\[|\\s|^)(?P<content_level>info|warn|warning|error|debug|fatal|critical|trace)(?:[\"\\]\\s]|$)"
}
stage.labels {
values = {
"level" = "content_level",
@@ -8,7 +8,7 @@ log_compression: true
log_max_size: 100
log_max_backups: 3
log_max_age: 30
api_url: "https://{{ infra_uri['crowdsec']['domain'] }}:{{ infra_uri['crowdsec']['ports']['https'] }}"
api_url: "https://{{ services['crowdsec']['domain'] }}.{{ domain['internal'] }}:{{ services['crowdsec']['ports']['https'] }}"
api_key: "{{ hostvars['console']['crowdsec']['bouncer']['fw'] }}"
insecure_skip_verify: false
disable_ipv6: false
@@ -13,7 +13,14 @@ whitelist:
{% if node['name'] == 'auth' %}
expression:
# budget local-first sql scrap rule
- "evt.Meta.target_fqdn == 'budget.ilnmors.com' && evt.Meta.http_path contains '/data/migrations/'"
- "evt.Meta.target_fqdn == '{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/data/migrations/'"
# immich thumbnail request 404 error false positive
- "evt.Meta.target_fqdn == 'immich.ilnmors.com' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'"
- "evt.Meta.target_fqdn == '{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'"
# opencloud chunk request false positive
- "evt.Meta.target_fqdn == '{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/js/chunks/'"
# nextcloud chunk request false positive (crowdsecurity/http-crawl-non_statics)
- "evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/apps/viewer/js/'"
- "evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/dist/'"
# nextcloud upload directory request 404 error false positive (crowdsecurity/http-probing)
- "evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/remote.php/dav/files/'"
{% endif %}
@@ -1,3 +1,3 @@
url: https://{{ infra_uri['crowdsec']['domain'] }}:{{ infra_uri['crowdsec']['ports']['https'] }}
url: https://{{ services['crowdsec']['domain'] }}.{{ domain['internal'] }}:{{ services['crowdsec']['ports']['https'] }}
login: {{ node['name'] }}
password: {{ hostvars['console']['crowdsec']['machine'][node['name']] }}
@@ -21,9 +21,9 @@ ProtectHome=tmpfs
InaccessiblePaths=/boot /root
{% if node['name'] == 'infra' %}
BindReadOnlyPaths=/home/infra/containers/postgresql/backups
BindReadOnlyPaths=%h/containers/postgresql/backups
{% elif node['name'] == 'app' %}
BindReadOnlyPaths=/home/app/data
BindReadOnlyPaths=%h/data
{% endif %}
# In root namescope, %u always bring 0
BindPaths=/etc/kopia
@@ -32,9 +32,9 @@ BindPaths=/var/cache/kopia
EnvironmentFile=/etc/secrets/{{ kopia_uid }}/kopia.env
ExecStartPre=/usr/bin/kopia repository connect server \
--url=https://{{ infra_uri['kopia']['domain'] }}:{{ infra_uri['kopia']['ports']['https'] }} \
--url=https://{{ services['kopia']['domain'] }}.{{ domain['internal'] }}:{{ services['kopia']['ports']['https'] }} \
--override-username={{ node['name'] }} \
--override-hostname={{ node['name'] }}.ilnmors.internal
--override-hostname={{ node['name'] }}.{{ domain['internal'] }}
{% if node['name'] == 'infra' %}
ExecStart=/usr/bin/kopia snapshot create \
@@ -12,4 +12,4 @@ StandardError=journal
EnvironmentFile=/etc/secrets/%U/ddns.env
# Run the script
ExecStart=/usr/local/bin/ddns.sh -d "ilnmors.com"
ExecStart=/usr/local/bin/ddns.sh -d "{{ domain['public'] }}"
@@ -19,7 +19,7 @@
},
{
"name": "domain-name",
"data": "ilnmors.internal."
"data": "{{ domain['internal'] }}."
}
],
"reservations": [
@@ -65,7 +65,7 @@
},
{
"name": "domain-name",
"data": "ilnmors.internal."
"data": "{{ domain['internal'] }}."
}
],
"id": 2,
@@ -0,0 +1,33 @@
# Android application OIDC issue
## Status
- Processing
## Date
- 2026-04-20
## Version
- affine server: 0.26.3 (self-hosted)
- affine application: 0.26.3 (Android)
- IdP: Authelia:4.39.15
## Problem
- The Affine Android app cannot authenticate via OIDC
- IdP authentication succeeds, but the app does not establish a session
- The app remains on the "Sign In" screen
## Reason
- Affine uses the callback deep link `affine://authentication`
- For self-hosted instances the deep link carries a `server` parameter pointing to the correct origin, but the Android app never read it
- [Issue #12819: No SSO on Android](https://github.com/toeverything/AFFiNE/issues/12819)
- [PR #14809](https://github.com/toeverything/AFFiNE/pull/14809)
## Timeline
- 2025-06-14: Issue #12819
- 2026-04-08: PR #14809
- 2026-04-09: Canary branch merge
- 2026-04-15: Fork, cherry-pick
## Solution
- Wait for a stable release that contains the merge above
- Verify after updating once the stable version is released
@@ -0,0 +1,33 @@
# Actual Budget crowdsec false positive issue
## Status
- Finished
## Date
- 2026-03-21
## Version
- Actual Budget: 26.3.0
## Problem
- When users access and log in to Actual Budget, all connections to homelab services are refused.
- fw bans the users' IP addresses.
## Reason
- Actual Budget has a local-first policy.
- When a user logs in to Actual Budget, the client downloads all SQL files from the server.
- LAPI flags the concurrent download of sensitive (SQL) files as an attack.
## Timeline
- 2026-03-21: Release Actual Budget
- 2026-03-21: Find the false positive case, and add whitelist
## Solution
- Access to fw
- Check the ban list with `sudo cscli alerts list`
- Read the ban case with `sudo cscli alerts inspect $NUMBER`
- Add regex on whitelist
- evt.Meta.target_fqdn == '{{ services['actualbudget']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/data/migrations/'
- Delete false positive decision
- Check false positive decision with `sudo cscli decision list`
- Delete false positive decision with `sudo cscli decision delete --id $ID`
@@ -0,0 +1,32 @@
# Immich crowdsec false positive issue
## Status
- Finished
## Date
- 2026-03-21
## Version
- Immich: 2.6.1
## Problem
- When users access and log in to Immich while it is generating thumbnails, all connections to homelab services are refused.
- fw bans the users' IP addresses.
## Reason
- Immich returns 404 errors when clients request thumbnails that are still being generated.
- LAPI issues a ban when many 404 errors occur in a short time.
## Timeline
- 2026-03-21: Release Immich
- 2026-03-21: Find the false positive case, and add whitelist
## Solution
- Access to fw
- Check the ban list with `sudo cscli alerts list`
- Read the ban case with `sudo cscli alerts inspect $NUMBER`
- Add regex on whitelist
- evt.Meta.target_fqdn == '{{ services['immich']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/api/assets/' && evt.Meta.http_path contains '/thumbnail'
- Delete false positive decision
- Check false positive decision with `sudo cscli decision list`
- Delete false positive decision with `sudo cscli decision delete --id $ID`
@@ -0,0 +1,32 @@
# OpenCloud crowdsec false positive issue
## Status
- Finished
## Date
- 2026-04-04
## Version
- OpenCloud: 4.0.4
## Problem
- When users download some files, all connections to homelab services are refused.
- fw bans the users' IP addresses.
## Reason
- OpenCloud uses chunks when clients upload or download files.
- LAPI issues a ban when many chunk files are uploaded or downloaded by external devices.
## Timeline
- 2026-04-04: Release OpenCloud
- 2026-04-04: Find the false positive case, and add whitelist
## Solution
- Access to fw
- Check the ban list with `sudo cscli alerts list`
- Read the ban case with `sudo cscli alerts inspect $NUMBER`
- Add regex on whitelist
- evt.Meta.target_fqdn == '{{ services['opencloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/js/chunks/'
- Delete false positive decision
- Check false positive decision with `sudo cscli decision list`
- Delete false positive decision with `sudo cscli decision delete --id $ID`
@@ -0,0 +1,38 @@
# Nextcloud crowdsec false positive issue
## Status
- Finished
## Date
- 2026-05-02
## Version
- Nextcloud: 33.0.3
## Problem
- When users download or modify some files, all connections to homelab services are refused.
- fw bans the users' IP addresses.
## Reason
- Nextcloud serves chunked assets and uses chunked uploads and downloads
- chunks on '/apps/viewer/js', '/dist/'
- `crowdsecurity/http-crawl-non_statics`
- Nextcloud keeps polling the directory that a file is being uploaded to
- upload directory '/remote.php/dav/files/'
- `crowdsecurity/http-probing`
## Timeline
- 2026-05-02: Release Nextcloud
- 2026-05-02: Find the false positive case, and add whitelist
## Solution
- Access to fw
- Check the ban list with `sudo cscli alerts list`
- Read the ban case with `sudo cscli alerts inspect $NUMBER`
- Add expressions on whitelist
- evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/apps/viewer/js/'
- evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/dist/'
- evt.Meta.target_fqdn == '{{ services['nextcloud']['domain']['public'] }}.{{ domain['public'] }}' && evt.Meta.http_path contains '/remote.php/dav/files/'
- Delete false positive decision
- Check false positive decision with `sudo cscli decision list`
- Delete false positive decision with `sudo cscli decision delete --id $ID`
@@ -90,7 +90,7 @@ Kea in fw already reserved DSM's IP. However it is necessary to set IP address s
## Authelia OIDC
- **!CAUTION!** It can be set after authelia is implemented
- Following [here](../../../config/containers/auth/authelia/config/authelia.yaml.j2) for Authelia configuration
- Following [here](../../config/services/containers/auth/authelia/config/authelia.yaml.j2) for Authelia configuration
- Control Panel:Domain/LDAP:SSO Client
- Login Settings: \[x\] Select SSO by default on the login page
- Services
@@ -192,9 +192,9 @@ BindPaths=/var/cache/kopia
EnvironmentFile=/etc/secrets/{{ kopia_uid }}/kopia.env
ExecStartPre=/usr/bin/kopia repository connect server \
--url=https://{{ infra_uri['kopia']['domain'] }}:{{ infra_uri['kopia']['ports']['https'] }} \
--url=https://{{ services['kopia']['domain'] }}.{{ domain['internal'] }}:{{ services['kopia']['ports']['https'] }} \
--override-username={{ node['name'] }} \
--override-hostname={{ node['name'] }}.ilnmors.internal
--override-hostname={{ node['name'] }}.{{ domain['internal'] }}
ExecStart=/usr/bin/kopia snapshot create \
/path/to/backup
@@ -1,5 +1,26 @@
# Git configuration
## Convention
- `type(scope): subject`
- type:
- feat: Add a new feature
- fix: Fix bugs or errors
- docs: Fix the documentation
- refactor: Modify code structure without functional changes
- perf: Improve performance
- chore: Modify system, package manager, etc. configuration
- style: Fix code formatting, etc.
## Commit and tags
- In this homelab, `[Infra_structure_change]:[Services_change]:[Documents_and_configuration_change]` is the tagging rule.
- Tagging and commits should be distinguished.
- A change that affects the system: tag
- A change that doesn't affect the system: commit
- `git commit -m "docs(git): define git convention"`
## Local git
```bash
@@ -29,14 +50,8 @@ git add .
# Check git changes
git status
git commit -m "1.0.0: Release IaaS baseline"
# git commit -m "docs: update 07-git.md to add the way to manage git system"
# Make current documents as snapshot
git tag -a 1.0.0 -m "IaaS baseline"
# Make special changes
# In this homelab, [Infra_structure_change]:[Services_change]:[Documents_and_configuration_change]
# Tagging and commit should be distinguished.
# The change which affects system: tagging
# The change which doesn't affect system: commit
# Commands
git status # What files are changed
@@ -0,0 +1,121 @@
# affine
## Prerequisite
### Create database
- Create the password with `openssl rand -base64 32`
- Save this value in secrets.yaml as `postgresql.password.affine`
- Access the infra server and create affine_db with `podman exec -it postgresql psql -U postgres`
```SQL
CREATE USER affine WITH PASSWORD 'postgresql.password.affine';
CREATE DATABASE affine_db;
ALTER DATABASE affine_db OWNER TO affine;
\connect affine_db
CREATE EXTENSION IF NOT EXISTS vector;
\dx
-- Check the extension is activated with `\dx`
-- postgresql image is built with `pgvector` and `vectorchord` already
```
### Create oidc secret and hash
- Create the secret with `openssl rand -base64 32`
- Access the auth VM
- `podman exec -it authelia sh`
- `authelia crypto hash generate pbkdf2 --password 'affine.oidc.secret'`
- Save these values in secrets.yaml as `affine.oidc.secret` and `affine.oidc.hash`
### Create secret key value
- Create the secret with `openssl genpkey -algorithm ed25519 -outform PEM`
- Save this value in secrets.yaml as `affine.secret_key`
### Create admin password
- Create the secret with `openssl rand -base64 32`
- Save this value in secrets.yaml as `affine.il.password`
### Add postgresql dump backup list
- [set_postgresql.yaml](../../../ansible/roles/infra/tasks/services/set_postgresql.yaml)
```yaml
- name: Set connected services list
ansible.builtin.set_fact:
connected_services:
- ...
- "affine"
```
## Configuration
### About community edition limitation
- Workspace seats
	- The number of accounts is unlimited.
	- However, the number of members who can work in the same workspace simultaneously \(seats\) is limited to 10.
- Workspace storage quota
	- Originally, the self-hosted version had no limit on storage quota or upload file size.
	- Now, some limits apply even to the self-hosted version.
	- This may change as the application is updated.
### Features to be applied in this system
- Link a local CalDAV server \(Baïkal or Radicale ...\)
- Apply AI functions with an API
### Access to affine
- https://affine.ilnmors.com
- Getting started
- admin name
- admin E-mail
- admin password
- The initial setup limits the password to 32 characters; just set a temporary password for now
### Server configuration
- https://affine.ilnmors.com/admin
#### Server
- A recognizable name for the server. Will be shown when connected with AFFiNE Desktop.
- Ilnmors
#### Auth
- [ ] Whether allow new registrations
- [x] Whether allow new registration via configured oauth
- Minimum length requirement of password: 8
- Maximum length requirement of password: 50
- save
#### Oauth configuration
```ini
# These options are required
## OIDC callback URIs
Environment="AFFINE_SERVER_HOST={{ services['affine']['domain']['public'] }}.{{ domain['public'] }}"
Environment="AFFINE_SERVER_EXTERNAL_URL=https://{{ services['affine']['domain']['public'] }}.{{ domain['public'] }}"
Environment="AFFINE_SERVER_HTTPS=true"
```
- OIDC Oauth provider config
```json
{
"clientId":"affine",
"clientSecret":"affine.oidc.secret",
"issuer":"https://authelia.ilnmors.com",
"args":{
"scope": "openid profile email"
}
}
```
- save
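Before saving, it can help to confirm that the `issuer` URL from the config above actually serves an OIDC discovery document:

```shell
# Sanity check for the issuer used in the provider config above.
DISCOVERY_URL="https://authelia.ilnmors.com/.well-known/openid-configuration"
DISCOVERY_JSON="$(curl -fsS "$DISCOVERY_URL" 2>/dev/null || true)"
case "$DISCOVERY_JSON" in
  *'"token_endpoint"'*) echo "issuer OK" ;;
  *) echo "discovery document not reachable from this host" ;;
esac
```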
#### Flags
- [x] Whether allow guest users to create demo workspaces
- save
After reboot, check the render device.
```bash
ls -l /dev/dri
# crw-rw---- 1 root video 226, 0 ... card0
# crw-rw---- 1 root render 226, 128 ... renderD128
```
# Nextcloud
## Prerequisite
### Create database
- Create the password with `openssl rand -base64 32`
- Save this value in secrets.yaml in `postgresql.password.nextcloud`
- Access infra server to create nextcloud_db with `podman exec -it postgresql psql -U postgres`
```SQL
CREATE USER nextcloud WITH PASSWORD 'postgresql.password.nextcloud';
CREATE DATABASE nextcloud_db;
ALTER DATABASE nextcloud_db OWNER TO nextcloud;
```
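After creating the role and database, it is worth verifying that the new role can actually log in; a sketch for the infra server, where `NEXTCLOUD_DB_PASS` is a stand-in for `postgresql.password.nextcloud` from secrets.yaml.

```shell
# Verify the nextcloud role can reach its database.
NEXTCLOUD_DB_PASS="changeme"   # placeholder for postgresql.password.nextcloud
if command -v podman >/dev/null 2>&1; then
  podman exec -e PGPASSWORD="${NEXTCLOUD_DB_PASS}" postgresql \
    psql -U nextcloud -d nextcloud_db -c '\conninfo'
else
  echo "podman not available; run this on the infra server"
fi
```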
### Create oidc secret and hash
- Create the secret with `openssl rand -base64 32`
- Access the auth VM
- `podman exec -it authelia sh`
- `authelia crypto hash generate pbkdf2 --password 'nextcloud.oidc.secret'`
- Save this value in secrets.yaml in `nextcloud.oidc.secret` and `nextcloud.oidc.hash`
### Create admin password
- Create the secret with `openssl rand -base64 32`
- Save this value in secrets.yaml in `nextcloud.admin-local.password`
### Add postgresql dump backup list
- [set_postgresql.yaml](../../../ansible/roles/infra/tasks/services/set_postgresql.yaml)
```yaml
- name: Set connected services list
ansible.builtin.set_fact:
connected_services:
- ...
- "nextcloud"
```
## Configuration
### Access
- https://nextcloud.ilnmors.com
- Log in with admin-local
### Disable and enable apps
- Profile: Apps: Your apps: Disable
	- Photos
	- Dashboard
- Profile: Apps: Search
- OpenID Connect user backend
- Calendar
- Contacts
- Deck
- Tasks
- Mail
- Nextcloud Office
### Configuration
```bash
podman exec -u www-data nextcloud php occ user_oidc:provider Authelia \
--clientid="nextcloud" \
--clientsecret="nextcloud.oidc.secret" \
--discoveryuri="https://authelia.ilnmors.com/.well-known/openid-configuration" \
--scope="openid profile email groups" \
--unique-uid=0 \
--mapping-uid="preferred_username" \
--mapping-display-name="name" \
--mapping-email="email" \
--mapping-groups="groups" \
--group-whitelist-regex="/^users$/" \
--group-provisioning=1
podman exec -u www-data nextcloud php occ db:add-missing-indices
podman exec -u www-data nextcloud php occ db:add-missing-columns
podman exec -u www-data nextcloud php occ db:add-missing-primary-keys
```
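After the `occ` configuration above, the registration can be verified; running `occ user_oidc:provider` without a name should list the configured providers (assumed behavior of the user_oidc app; verify on your version).

```shell
# Confirm the provider registration and overall instance health.
OCC="podman exec -u www-data nextcloud php occ"
if command -v podman >/dev/null 2>&1; then
  $OCC user_oidc:provider   # should list the Authelia provider
  $OCC status               # reports install state and version
else
  echo "podman not available; run this on the service host"
fi
```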
### Account configuration
- Profile: Accounts:
	- Allocate the admin group to admin users
+25
View File
@@ -0,0 +1,25 @@
# opencloud
## Prerequisite
### oidc secret and hash
- OpenCloud uses PKCE, therefore it doesn't need a client secret
### Create admin password
- Create the password with `openssl rand -base64 32`
- Save this value in secrets.yaml in `opencloud.admin.password`
## Configuration
- **!CAUTION!** The OpenCloud applications \(Android, iOS, Desktop\) don't support standard OIDC. All scopes and client IDs are hardcoded.
- WEBFINGER_\[DESKTOP|ANDROID|IOS\]_OIDC_CLIENT_ID and WEBFINGER_\[DESKTOP|ANDROID|IOS\]_OIDC_CLIENT_SCOPES have no effect on the official apps.
- The group claim cannot be added to the scopes, so it is hard to control roles with a token that includes the group claim.
- When Authelia is down, comment out `OC_EXCLUDE_RUN_SERVICES=idp` and restart the container to use the local admin.
- This app doesn't support regex in the role_assignment mapping.
- When a new user is added, manage proxy.yaml.j2 manually until upstream supports regex or fallback mapping, or fixes the hardcoded scopes in the applications.
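The local-admin fallback for an Authelia outage can be sketched as below; the env-file path and unit name are assumptions for this homelab, not confirmed values.

```shell
# Re-enable the built-in IDP by commenting out the exclusion, then restart.
ENV_FILE="/etc/opencloud/opencloud.env"   # assumed location of the container env
if [ -w "$ENV_FILE" ]; then
  sed -i 's/^OC_EXCLUDE_RUN_SERVICES=idp/# OC_EXCLUDE_RUN_SERVICES=idp/' "$ENV_FILE"
  systemctl restart opencloud             # assumed unit name
else
  echo "edit the container environment by hand, then restart the container"
fi
```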
### csp
- Fix `csp.yaml`
- Continue
- Login with Authelia
### OCR configuration
- Configuration: OCR settings
- Output Type: pdfa
- Mode: skip
- When the archive file has broken OCR text, run the replace command manually
- Skip Archive File: never
- Deskew: disable \(toggle to enable and once more to activate the disable option\)
- Rotate: disable \(toggle to enable and once more to activate the disable option\)
## Non-standard PDF files
- Some PDF files don't follow the standard, for example Korean court or government PDFs.
