# 1.0.0 Release IaaS

docs/archives/2025-12/08_development/08_01_dev_vm.md
Tags: #os, #configuration, #development, #virtualization

## Preparation

### Set DHCP reservation and DNS record

#### Set DHCP reservation on KEA DHCP in OPNsense

Following [here](05_07_opnsense_kea.md).

- Services:Kea DHCP:Kea DHCPv4:Reservations - \[+\]
- Subnet: 192.168.10.0/24
- IP address: 192.168.10.13
- MAC address: 0A:49:6E:4D:03:00
- Hostname: dev
- Description: dev
- `save`

#### Set DNS records in BIND

Following [here](../06_network/06_03_net_bind.md).

- net server
- file:
  - ~/data/containers/bind/lib/db.ilnmors.internal
  - ~/data/containers/bind/lib/db.10.168.192.in-addr.arpa

```ini
# db.ilnmors.internal
# ...
dev IN A 192.168.10.13
# ...

# db.10.168.192.in-addr.arpa
# ...
13 IN PTR dev.ilnmors.internal.
# ...
```

```bash
# The Adguard container has Requires=bind.service, so restarting bind
# also restarts Adguard.
systemctl --user restart bind
```
### Create VM template

- ~/data/config/scripts/dev.sh

```bash
# For serial installation, use `--location` instead of `--cdrom`.
# The NIC uses the designated OVS port group.
# After entering this command, the installer console starts automatically.
virt-install \
  --boot uefi \
  --name dev \
  --os-variant debian13 \
  --vcpus 2 \
  --memory 6144 \
  --location /var/lib/libvirt/images/debian-13.0.0-amd64-netinst.iso \
  --disk pool=vm-images,size=258,format=qcow2,discard=unmap \
  --network network=ovs-lan-net,portgroup=vlan10-access,model=virtio,mac=0A:49:6E:4D:03:00 \
  --graphics none \
  --console pty,target_type=serial \
  --extra-args "console=ttyS0,115200"
```
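The MAC above starts with `0A`, a locally administered unicast address, which is good practice for VM NICs. A quick sketch of the bit check (illustration only, not part of the provisioning script):

```bash
# First octet of the VM MAC: bit 1 set = locally administered,
# bit 0 clear = unicast. 0x0A = 0b1010 satisfies both.
mac="0A:49:6E:4D:03:00"
first_octet=$(( 16#${mac%%:*} ))
if (( first_octet & 2 )) && (( (first_octet & 1) == 0 )); then
  echo "locally administered unicast"
fi
```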
### Debian installation

- Follow [here](../03_common/03_01_debian_configuration.md) to install Debian.
- The Debian installer supports serial mode regardless of whether the getty@ttyS0 service is enabled.
- Follow [here](../03_common/03_02_iptables.md) to set iptables.
- Follow [here](../03_common/03_04_crowdsec.md) to set up CrowdSec.

#### Serial console setting

After installation, use `ctrl + ]` to exit the console. Until getty@ttyS0 is enabled, the serial console cannot be used to access the VM. Therefore, connect to the server over SSH using the IP address set during installation, then follow the steps to enable the getty.

### Modify VM template settings

After the getty setting, shut the dev VM down first, either with `shutdown` inside the VM or with `sudo virsh shutdown dev` on the hypervisor.

```bash
virsh edit dev
```
```xml
<!-- dev -->
...
</vcpu>
<cputune>
  <shares>1024</shares>
</cputune>
<!-- CPU priority via shares - 1024: default / 2048: high / 512: low -->

<!-- Remove the installer boot disk:
<disk type='file' device='cdrom'>
...
</disk>
-->
```

```bash
virsh dumpxml dev > ~/data/config/vms/dumps/dev.xml

# Start the dev server with an attached console
virsh start dev && virsh console dev
```
### Common setting

- dev.service

```ini
# ~/data/config/services/dev.service
# ~/.config/systemd/user/dev.service
[Unit]
Description=dev Auto Booting
After=network-online.target
Wants=network-online.target
Requires=opnsense.service

[Service]
Type=oneshot

# Keep the unit active after the command exits
RemainAfterExit=yes

# CrowdSec should already be set up
ExecStartPre=%h/data/config/scripts/wait-for-it.sh 192.168.10.1:8080 -t 0
ExecStartPre=%h/data/config/scripts/wait-for-it.sh 192.168.10.11:53 -t 0
ExecStartPre=%h/data/config/scripts/wait-for-it.sh 192.168.10.12:9000 -t 0

ExecStartPre=/bin/bash -c "sleep 15"

# Start the VM
ExecStart=/usr/bin/virsh -c qemu:///system start dev

# Stop the VM
ExecStop=/usr/bin/virsh -c qemu:///system shutdown dev

[Install]
WantedBy=default.target
```
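The `ExecStartPre` lines call the external `wait-for-it.sh` helper, which blocks until a TCP port accepts connections. A minimal bash sketch of that behaviour, for illustration only (assumes bash with `/dev/tcp` support; the real script has more options):

```bash
# Poll host:port until it accepts a TCP connection, or give up after
# $timeout seconds - a rough stand-in for `wait-for-it.sh HOST:PORT -t N`.
wait_for_port() {
  local host=$1 port=$2 timeout=${3:-15} start=$SECONDS
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    (( SECONDS - start >= timeout )) && return 1
    sleep 1
  done
  exec 3>&- 3<&-   # close the probe descriptor on success
}
```

For example, `wait_for_port 192.168.10.1 8080 30` would wait up to 30 seconds for OPNsense's port 8080.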
```bash
ln -s ~/data/config/services/dev.service ~/.config/systemd/user/dev.service

systemctl --user daemon-reload
systemctl --user enable dev.service
systemctl --user start dev.service
```

docs/archives/2025-12/08_development/08_02_dev_postgresql.md
Tags: #os, #configuration, #virtualization, #container, #database

## postgresql

PostgreSQL is one of the most popular open-source RDBMSs (Relational Database Management Systems). An RDBMS stores data in tables made of rows and columns, and uses SQL (Structured Query Language) to manage large amounts of data while guaranteeing data integrity.

### Secret management

In this project, every PostgreSQL operation that needs superuser permission is performed in the local container or via podman exec (never from another pod). However, the environment variable `POSTGRES_PASSWORD` must still be set when the database is initialized. Superuser access is configured with the `trust` method, which means it is allowed only locally. This is set in `pg_hba.conf` as below.

- `local all postgres trust`
- `hostssl all all 192.168.10.x/32 scram-sha-256` (the servers that use PostgreSQL)
- `host all all 127.0.0.1/32 scram-sha-256`
#### Secret

- File:
  - ~/data/config/secrets/.secret.yaml

- Edit `.secret.yaml` with `edit_secret.sh`

```yaml
# ~/data/config/secrets/.secret.yaml
# postgresql
POSTGRES_PASSWORD: secret
```
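`edit_secret.sh` and `extract_secret.sh` are this project's own helpers and are not shown here. For a flat `KEY: value` file like the one above, the extraction step amounts to something like the following hypothetical stand-in (the real script may also handle encryption):

```bash
# Print the value of a top-level "KEY: value" pair from a simple YAML file.
# Hypothetical sketch of what extract_secret.sh does - not the real script.
extract_secret_sketch() {
  local file=$1 key=$2
  awk -F': ' -v k="$key" '$1 == k { print $2; exit }' "$file"
}
```

Usage would mirror the documented pipeline, e.g. `extract_secret_sketch .secret.yaml POSTGRES_PASSWORD | podman secret create POSTGRES_PASSWORD -`.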
### Preparation

#### iptables and firewall rules

- Set iptables first, following [here](../03_common/03_02_iptables.md).
- 5432: auth, app
- Set firewall rules first, following [here](Latest/05_firewall/05_04_opnsense_rules.md).

#### Create directory for container

```bash
mkdir -p ~/data/containers/postgresql
chmod 700 ~/data/containers/postgresql
setfacl -m d:g::0 ~/data/containers/postgresql
setfacl -m d:o::0 ~/data/containers/postgresql
setfacl -m u:dev:rwx ~/data/containers/postgresql
setfacl -m u:100998:rwx ~/data/containers/postgresql
setfacl -d -m u:dev:rwx ~/data/containers/postgresql
setfacl -d -m u:100998:rwx ~/data/containers/postgresql
mkdir ~/data/containers/postgresql/{backups,config,certs,data,initdb}

# After the container has generated its data, reset the ACL masks
sudo find ~/data/containers/postgresql -type d -exec setfacl -m m::rwx {} \;
```
> The postgresql container runs as 999:999 (postgres:postgres) inside the container, which maps to the host's UID 100998. Therefore the directories need ACLs set via `setfacl`.
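The 999 → 100998 mapping follows from rootless Podman's subordinate-UID layout: assuming a default `/etc/subuid` range starting at 100000, container UID N (for N > 0) maps to `start + N - 1`:

```bash
# Container UID 999 (postgres) under a subuid range starting at 100000.
# (100000 is an assumption - check /etc/subuid for the actual range.)
subuid_start=100000
container_uid=999
host_uid=$(( subuid_start + container_uid - 1 ))
echo "$host_uid"   # 100998
```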
#### Add new domain in BIND

Following [here](../06_network/06_03_net_bind.md).

- net server
- file: ~/data/containers/bind/lib/db.ilnmors.internal

```ini
# ...
postgresql IN CNAME dev.ilnmors.internal.
# ...
```

```bash
# The Adguard container has Requires=bind.service, so restarting bind
# also restarts Adguard.
systemctl --user restart bind
```

### Podman Image

```bash
podman pull postgres:18.0 # Pin a specific version rather than latest for easier management
```
### Config file

- File:
  - ~/data/containers/postgresql/config/postgresql.conf
  - ~/data/containers/postgresql/config/pg_hba.conf

```bash
# Extract the sample config files from the image
podman run --rm postgres:18.0 cat /usr/share/postgresql/postgresql.conf.sample > ~/data/containers/postgresql/config/postgresql.conf

podman run --rm postgres:18.0 cat /usr/share/postgresql/18/pg_hba.conf.sample > ~/data/containers/postgresql/config/pg_hba.conf

# Generate the password
openssl rand -base64 32

# Podman secret
extract_secret.sh ~/data/config/secrets/.secret.yaml -f "POSTGRES_PASSWORD" | podman secret create "POSTGRES_PASSWORD" -
```

> If a schema backup file already exists, go to the `Restore` section.
```ini
# ~/data/containers/postgresql/config/postgresql.conf

# Add settings for extensions here

# hba_file directory
hba_file = '/config/pg_hba.conf'

# listen_addresses
listen_addresses = '*'

# listen_port
port = 5432

# SSL
ssl = on
ssl_ca_file = '/etc/ssl/postgresql/root_ca.crt'
ssl_cert_file = '/etc/ssl/postgresql/postgresql.ilnmors.internal/fullchain.pem'
ssl_key_file = '/etc/ssl/postgresql/postgresql.ilnmors.internal/key.pem'
ssl_ciphers = 'HIGH:!aNULL:!MD5'
ssl_prefer_server_ciphers = on

# log
log_destination = 'stderr'
log_checkpoints = on
log_temp_files = 0
log_min_duration_statement = 500
```

```ini
# ~/data/containers/postgresql/config/pg_hba.conf

# Local host `trust`
local all all trust

# Local connection (container) needs password (127.0.0.1 - container loopback)
host all all 127.0.0.1/32 scram-sha-256
# Local connection (dev) needs password (169.254.1.2 - host-gateway)
# Maybe Grafana(SQLite), Uptime kuma(SQLite), Loki(BoltDB), Dovecot(LDAP or />
#hostssl all all 169.254.1.2/32 scram-sha-256

# auth VM (Authentik, 192.168.10.12)
hostssl all all 192.168.10.12/32 scram-sha-256

# app VM (Applications, 192.168.10.14)
hostssl all all 192.168.10.14/32 scram-sha-256

# explicit deny
host all all 192.168.10.0/24 reject
```
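`pg_hba.conf` is evaluated top to bottom and the first matching line wins, which is why the broad `reject` comes last. A toy first-match lookup over the exact addresses above (illustration only; real matching is by CIDR, user, database, and connection type):

```bash
# Simplified first-match sketch of the pg_hba.conf rules above.
match_hba_sketch() {
  case "$1" in
    127.0.0.1)                   echo "scram-sha-256" ;;  # container loopback
    192.168.10.12|192.168.10.14) echo "scram-sha-256" ;;  # auth / app VMs
    192.168.10.*)                echo "reject" ;;         # rest of the subnet
    *)                           echo "no rule" ;;
  esac
}
```

For example, `match_hba_sketch 192.168.10.14` yields `scram-sha-256`, while any other 192.168.10.x host falls through to `reject`.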
#### Initiating (Restoring)

- File:
  - ~/data/config/scripts/postgresql/postgresql_init.sh
  - ~/data/containers/postgresql/backups/postgresql-cluster_$(date "+%Y-%m-%d").sql

```bash
#!/bin/bash
# ~/data/config/scripts/postgresql/postgresql_init.sh [FILE_PATH]
set -e
DATA_PATH="$HOME/data/containers/postgresql/data"
FILE_PATH="$1"
VERSION=18
FLAG=""

# Check the PostgreSQL service
if [ "$(systemctl --user is-active postgresql)" == "active" ]; then
    echo "PostgreSQL should be terminated"
    exit 1
fi

# Check the podman secret
if [ -z "$(podman secret list | grep "POSTGRES_PASSWORD")" ]; then
    echo "POSTGRES_PASSWORD has to be in podman secret"
    exit 1
fi

# Check the data path
if [ -n "$(ls -A "$DATA_PATH")" ]; then
    echo "$DATA_PATH should be empty"
    exit 1
fi

# Check the sql file
if [ -z "$FILE_PATH" ]; then
    echo "Initiating PostgreSQL"
    FLAG="FALSE"
else
    if [ ! -f "$FILE_PATH" ] || [ ! -s "$FILE_PATH" ] || [ -z "$(echo "$FILE_PATH" | grep "\.sql$")" ]; then
        echo "Available .sql format file is needed"
        exit 1
    fi
    echo "Restoring PostgreSQL"
    FLAG="TRUE"
fi

# Initiating: run the image once so initdb creates the cluster, then exit
podman run --rm \
    --secret POSTGRES_PASSWORD,type=env \
    -e TZ="Asia/Seoul" \
    -v "$DATA_PATH":/var/lib/postgresql:rw \
    postgres:18.0 \
    -C "port" || true

# postgresql start
echo "Start postgresql service"
systemctl --user start postgresql

# Restoring
if [ "$FLAG" == "TRUE" ]; then
    while [ -z "$(systemctl --user status postgresql | grep "database system is ready to accept connections")" ]; do
        sleep 1
    done
    echo "Start restoring PostgreSQL"
    cat "$FILE_PATH" | podman exec -i -u postgres postgresql psql -U postgres
    echo "Finish restoring PostgreSQL"
fi

exit 0
```
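The argument check in the script only accepts an existing, non-empty `*.sql` file. Exercised in isolation, the validation behaves like this (illustrative re-implementation with a throwaway path):

```bash
# Mirror of the script's FILE_PATH validation: the file must exist,
# be non-empty, and end in .sql.
is_sql_file() { [ -f "$1" ] && [ -s "$1" ] && [[ "$1" == *.sql ]]; }

demo=/tmp/demo_restore.sql          # hypothetical scratch file
echo "SELECT 1;" > "$demo"
is_sql_file "$demo" && echo "valid"
is_sql_file "/nonexistent.sql" || echo "invalid"
rm -f "$demo"
```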
#### Certificates configuration

- File: ~/data/containers/postgresql/certs/root_ca.crt

- ACME setting (OPNsense)
- Services:ACME Client:Certificates - Certificates - \[+\]
- Common Name: postgresql.ilnmors.internal
- Description: postgresql
- ACME Account: acme.ilnmors.internal
> Even though the provisioner's name includes `@`, it has to be written with `.`.
>
> i.e. `acme@ilnmors.internal` > `acme.ilnmors.internal`
- Challenge Type: ilnmors.internal-dns-01-challenge
- \[\*\] Auto Renewal
- Automations: postgresql-auto-acme, postgresql-auto-reload

- Automations (OPNsense)
- Services:ACME Client:Automations - Automation - \[+\]
- Name: postgresql-auto-acme / postgresql-auto-reload
- Description: postgresql acme crt issue / reload postgresql after crt is issued
- Run Command: Upload certificate via SFTP / Remote command via SSH
- SFTP Host: postgresql.ilnmors.internal
- Username: dev
- Identity Type: ed25519
- Remote Path (SFTP): /home/dev/data/containers/postgresql/certs
- Command (SSH): setfacl -m m::r /home/dev/data/containers/postgresql/certs/postgresql.ilnmors.internal/* && podman exec -u postgres postgresql pg_ctl reload
- `Show Identity`
> Copy the required parameters `ssh-ed25519 ~~~ root@opnsense.ilnmors.internal`
>
> Add the parameters to the dev server's ~/.ssh/authorized_keys
- `Test Connect` and `Save`

- The SSH command will only succeed after postgresql has started.
#### Quadlet

- File:
  - ~/data/config/containers/postgresql/postgresql.container

```ini
# ~/data/config/containers/postgresql/postgresql.container
[Quadlet]
DefaultDependencies=false

[Unit]
Description=PostgreSQL

After=network-online.target
Wants=network-online.target

[Container]
Image=postgres:18.0

ContainerName=postgresql

PublishPort=5432:5432/tcp

Volume=%h/data/containers/postgresql/data:/var/lib/postgresql:rw
Volume=%h/data/containers/postgresql/config:/config:ro
Volume=%h/data/containers/postgresql/backups:/backups:rw
Volume=%h/data/containers/postgresql/certs:/etc/ssl/postgresql:ro

Environment="TZ=Asia/Seoul"

Exec=postgres -c 'config_file=/config/postgresql.conf'

Label=diun.enable=true
Label=diun.watch_repo=true

[Install]
WantedBy=default.target
```
#### Create systemd `.service` file

```bash
mkdir -p ~/.config/containers/systemd
chmod -R 700 ~/.config/containers/systemd

ln -s ~/data/config/containers/postgresql/postgresql.container ~/.config/containers/systemd/postgresql.container

systemctl --user daemon-reload
```

#### Enable and start service

```bash
# Before starting the service, setfacl the certificates manually
# (the automation registered above will do this on renewal)
setfacl -m m::r /home/dev/data/containers/postgresql/certs/postgresql.ilnmors.internal/*

systemctl --user start postgresql.service
```
### Create user

- Log in to psql in the container

```bash
podman exec -it -u postgres postgresql psql -U postgres
> # Create user and database
> CREATE USER $USER WITH PASSWORD 'password';
> CREATE DATABASE $DB;
> ALTER DATABASE $DB OWNER TO $USER;
> \du
> \l

# Whenever you modify the schema (users or database structure), run postgresql-cluster-backup.service
systemctl --user start postgresql-cluster-backup.service

> # If you want to change the password
> ALTER USER $USER WITH PASSWORD 'password';
> # After this, update the .secret.yaml file and the podman secret
```
### Backup

#### Backup service

- File:
  - ~/data/config/services/postgresql/postgresql-cluster-backup.service
  - ~/data/config/services/postgresql/postgresql-cluster-backup.timer
  - ~/data/config/services/postgresql/postgresql-data-backup@.service
  - ~/data/config/services/postgresql/postgresql-data-backup@.timer

#### Cluster

```ini
# ~/data/config/services/postgresql/postgresql-cluster-backup.service
# ~/.config/systemd/user/postgresql-cluster-backup.service
[Unit]
Description=PostgreSQL Cluster Backup Service
After=postgresql.service
BindsTo=postgresql.service

[Service]
Type=oneshot

# logging
StandardOutput=journal
StandardError=journal

ExecStartPre=podman exec -u postgres postgresql sh -c "mkdir -p /backups/cluster"

# Run the dump
ExecStart=podman exec -u postgres postgresql sh -c "pg_dumpall -U postgres --schema-only | grep -v -E \"ROLE postgres\" > /backups/cluster/postgresql-cluster_$(date "+%%Y-%%m-%%d").sql"

# Prune dumps older than 7 days
ExecStop=podman exec -u postgres postgresql sh -c "find /backups/cluster -maxdepth 1 -type f -mtime +7 -delete"
```
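Note the doubled `%%` in `ExecStart`: systemd consumes one level of `%` as a specifier escape, so `%%Y-%%m-%%d` reaches the shell as `%Y-%m-%d`. The resulting filename can be previewed directly:

```bash
# Preview today's cluster dump filename as the service would create it.
backup_file="postgresql-cluster_$(date "+%Y-%m-%d").sql"
echo "$backup_file"
```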
```ini
# ~/data/config/services/postgresql/postgresql-cluster-backup.timer
# ~/.config/systemd/user/postgresql-cluster-backup.timer
[Unit]
Description=Run PostgreSQL Cluster Backup service every day

[Timer]
# Execute the service 1 min after booting
OnBootSec=1min

# Execute the service every day at 00:00
OnCalendar=*-*-* 00:00:00
# Random delay to postpone the timer
RandomizedDelaySec=15min

# Catch up on runs missed while powered off
Persistent=true

[Install]
WantedBy=timers.target
```
##### Restore cluster

- File:
  - ~/data/config/scripts/postgresql/postgresql_init.sh
  - ~/data/containers/postgresql/backups/cluster/postgresql-cluster_$(date "+%Y-%m-%d").sql

- Use the `postgresql_init.sh postgresql-cluster_$(date "+%Y-%m-%d").sql` command

#### Data for each app
```ini
# ~/data/config/services/postgresql/postgresql-data-backup@.service
# ~/.config/systemd/user/postgresql-data-backup@.service
[Unit]
Description=PostgreSQL Data %i Backup Service
After=postgresql.service
BindsTo=postgresql.service

[Service]
Type=oneshot

# logging
StandardOutput=journal
StandardError=journal

ExecStartPre=podman exec -u postgres postgresql sh -c "mkdir -p /backups/%i"
# Run the dump
ExecStart=podman exec -u postgres postgresql sh -c "pg_dump -U postgres -d %i_db --data-only > /backups/%i/postgresql-%i-data_$(date "+%%Y-%%m-%%d").sql"

# Prune dumps older than 7 days
ExecStop=podman exec -u postgres postgresql sh -c "find /backups/%i -maxdepth 1 -type f -mtime +7 -delete"
```
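In a templated unit, `%i` is replaced by the instance name, so enabling `postgresql-data-backup@app.timer` makes the service dump `app_db` into `/backups/app`. A sketch of the substitution (illustrative; `app` is just an example instance):

```bash
# How the templated ExecStart expands for a given instance name.
expand_instance() {
  local i=$1
  echo "pg_dump -U postgres -d ${i}_db --data-only"
}
expand_instance app   # pg_dump -U postgres -d app_db --data-only
```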
```ini
# ~/data/config/services/postgresql/postgresql-data-backup@.timer
# ~/.config/systemd/user/postgresql-data-backup@.timer
[Unit]
Description=Run %i Data Backup service every day

[Timer]
# Execute the service 1 min after booting
OnBootSec=1min

# Execute the service every day at 00:00
OnCalendar=*-*-* 00:00:00
# Random delay to postpone the timer
RandomizedDelaySec=15min

# Catch up on runs missed while powered off
Persistent=true

[Install]
WantedBy=timers.target
```
##### Data restore

- File: postgresql-app-data_$(date "+%Y-%m-%d").sql

```bash
# The schema (cluster, DB, and user) must already exist.

# Terminate the DB owner's sessions
# On the server where the app is located:
systemctl --user stop app.service
# Check sessions on the dev server
# Print all sessions
podman exec -u postgres postgresql psql -U postgres -c "SELECT * from pg_stat_activity;"
> $TARGET_PID
# Terminate the session
podman exec -u postgres postgresql psql -U postgres -c "SELECT pg_terminate_backend($TARGET_PID);"

# Restore using psql
cat postgresql-app-data_$(date "+%Y-%m-%d").sql | podman exec -i -u postgres postgresql psql -U app
```
#### Register service

```bash
# Register services
mkdir -p ~/.config/systemd/user && chmod 700 ~/.config/systemd/user

ln -s ~/data/config/services/postgresql/postgresql-cluster-backup.service ~/.config/systemd/user/postgresql-cluster-backup.service
ln -s ~/data/config/services/postgresql/postgresql-cluster-backup.timer ~/.config/systemd/user/postgresql-cluster-backup.timer
ln -s ~/data/config/services/postgresql/postgresql-data-backup@.service ~/.config/systemd/user/postgresql-data-backup@.service
ln -s ~/data/config/services/postgresql/postgresql-data-backup@.timer ~/.config/systemd/user/postgresql-data-backup@.timer

systemctl --user daemon-reload

# Enable and start the timers
systemctl --user enable --now postgresql-cluster-backup.timer
systemctl --user enable --now postgresql-data-backup@app.timer
```
### Verification

```bash
# Init database
postgresql_init.sh
# ... Start postgresql service

# Create user and database
podman exec -it -u postgres postgresql psql -U postgres
> CREATE USER test WITH PASSWORD 'abc';
> CREATE DATABASE test_db;
> ALTER DATABASE test_db OWNER TO test;
> \du
> \l
> \q

# Run the backup service
systemctl --user start postgresql-cluster-backup.service

# Stop and remove all data
systemctl --user stop postgresql
sudo find "/home/dev/data/containers/postgresql/data" -mindepth 1 -delete

# Restore database
postgresql_init.sh ~/data/containers/postgresql/backups/cluster/filename.sql

# Check the restore
podman exec -it -u postgres postgresql psql -U postgres
> \du
> \l
```

docs/archives/2025-12/08_development/08_03_dev_sidecar_caddy.md
Tags: #os, #configuration, #network, #virtualization, #container, #security

## Caddy - dev

Caddy is an open-source reverse proxy (web server) that can automatically obtain and apply TLS certificates from a CA via the ACME protocol. This Caddy works as a sidecar, so it only uses private TLS and only communicates with auth's main Caddy. It therefore needs no module except the RFC2136 module, because the main Caddy handles all WAF functions.

### Secret management

- File:
  - ~/data/config/secrets/.secret.yaml

- Edit `.secret.yaml` with `edit_secret.sh`

```yaml
# ~/data/config/secrets/.secret.yaml
# CADDY:
CADDY_ACME_KEY: acme-key_key_value (Only secret value)
```

```bash
# Podman secret
extract_secret.sh .secret.yaml -f CADDY_ACME_KEY | podman secret create CADDY_ACME_KEY -
```
### Preparation

#### iptables and firewall rules

- Set iptables first, following [here](../03_common/03_02_iptables.md).
- Limit the access client to the main Caddy on auth ( -s 192.168.10.12/32 )
- 443 > 2443 (iptables setting)
- Set firewall rules first, following [here](Latest/05_firewall/05_04_opnsense_rules.md).

#### Create directory for container

```bash
mkdir -p ~/data/containers/caddy-dev/{etc,data}
chmod -R 700 ~/data/containers/caddy-dev
```
> The Caddy container runs as 0:0 (root:root) inside the container, which maps to the host user's own UID. Therefore the directories don't need ACLs via `setfacl`.
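This differs from the postgres container because rootless Podman maps container UID 0 to the invoking user's own UID, while non-zero container UIDs land in the subordinate range. A sketch of both cases, assuming a default `/etc/subuid` range starting at 100000:

```bash
# Rootless podman UID mapping sketch: container UID 0 -> the host user's UID;
# container UID N (N > 0) -> subuid_start + N - 1.
map_container_uid() {
  local host_uid=$1 subuid_start=$2 container_uid=$3
  if [ "$container_uid" -eq 0 ]; then
    echo "$host_uid"
  else
    echo $(( subuid_start + container_uid - 1 ))
  fi
}
map_container_uid 1000 100000 0     # 1000 - caddy's root, owned by the user directly
map_container_uid 1000 100000 999   # 100998 - postgres case, hence the ACLs there
```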
#### Add new domain in BIND

Following [here](../06_network/06_03_net_bind.md).

- net server
- file: ~/data/containers/bind/lib/db.ilnmors.internal

```ini
# ...
dev IN A 192.168.10.13
# ...
```

```bash
# The Adguard container has Requires=bind.service, so restarting bind
# also restarts Adguard.
systemctl --user restart bind
```
### Podman Image

#### Podman containerfile

Caddy supports various build-time modules. Since the sidecar Caddy only receives requests from the main Caddy, it needs nothing beyond the rfc2136 (nsupdate) module.

- file:
  - ~/data/config/containers/caddy-dev/containerfile-caddy-2.10.2-dev
  - ~/data/config/containers/caddy-dev/root_ca.crt

```containerfile
FROM caddy:2.10.2-builder-alpine AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/rfc2136

FROM caddy:2.10.2

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

COPY ./root_ca.crt /usr/local/share/ca-certificates/root_ca.crt

RUN update-ca-certificates
```

#### Podman image build

```bash
podman build -t caddy:2.10.2-dev -f ~/data/config/containers/caddy-dev/containerfile-caddy-2.10.2-dev . && podman image prune -f
# Manually delete the plain caddy and caddy-builder-alpine images after the command above.
```
### Configuration files

The Caddyfile will be updated after the Authelia setting.

```bash
# Fix formatting inconsistencies
podman exec caddy-dev caddy fmt --overwrite /etc/caddy/Caddyfile
# Use this command after the Caddyfile setting is changed.
podman exec caddy-dev caddy reload --config /etc/caddy/Caddyfile
```
- file:
  - ~/data/containers/caddy-auth/etc/Caddyfile
  - ~/data/containers/authelia/config/configuration.yml
  - ~/data/containers/caddy-dev/etc/Caddyfile

```ini
# Caddyfile
# ~/data/containers/caddy-auth/etc/Caddyfile

# Forward Auth for other vms
(apply_forward_auth) {
    forward_auth host.containers.internal:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Email R>
    }
    reverse_proxy {args[0]} {
        header_up Host {http.reverse_proxy.upstream.host}
        # X-Forwarded-Host header contains the original Host value
    }
}

# ...
dev-test.ilnmors.com {
    import crowdsec_log
    route {
        crowdsec
        import apply_forward_auth https://dev.ilnmors.internal
    }
}
# ...
```
```ini
# Caddyfile
# ~/data/containers/caddy-dev/etc/Caddyfile
{
    server {
        trusted_proxies static 192.168.10.12/32
        trusted_proxies_strict
        # Used to find the real client IP.
        # Default (off): headers are parsed left to right; the first untrusted IP found is treated as the client IP
        # strict: parsed right to left, which makes spoofed client IPs easier to catch
    }
}

(private_tls) {
    tls {
        issuer acme {
            dir https://step-ca.ilnmors.internal:9000/acme/acme@ilnmors.internal/directory
            dns rfc2136 {
                server bind.ilnmors.internal:2253
                key_name acme-key
                key_alg hmac-sha256
                key "{file./run/secrets/CADDY_ACME_KEY}"
            }
        }
    }
}

dev.ilnmors.internal {
    import private_tls
    @test header X-Forwarded-Host dev-test.ilnmors.com
    route @test {
        root * /usr/share/caddy
        file_server
    }
}
```
```yaml
# configuration.yml

# ...
# Access control configuration
access_control:
  default_policy: 'deny'
  rules:
    # authelia portal
    - domain: 'authelia.ilnmors.internal'
      policy: 'bypass'
    - domain: 'authelia.ilnmors.com'
      policy: 'bypass'
    # Access control for Forward-Auth
    - domain: 'dev-test.ilnmors.com'
      policy: 'one_factor'
      subject:
        - 'group:admins'

# ...
```
### Quadlet

- File:
  - ~/data/config/containers/caddy-dev/caddy-dev.container

```ini
# ~/data/config/containers/caddy-dev/caddy-dev.container
[Quadlet]
DefaultDependencies=false

[Unit]
Description=Caddy - dev

After=network-online.target
Wants=network-online.target

[Service]
# Main Caddy and Step-CA
ExecStartPre=%h/data/config/scripts/wait-for-it.sh -h 192.168.10.12 -p 443 -t 0
ExecStartPre=%h/data/config/scripts/wait-for-it.sh -h 192.168.10.12 -p 9000 -t 0
ExecStartPre=sleep 5

[Container]
Image=localhost/caddy:2.10.2-dev

ContainerName=caddy-dev

PublishPort=2080:80/tcp
PublishPort=2443:443/tcp

Volume=%h/data/containers/caddy-dev/etc:/etc/caddy:rw
Volume=%h/data/containers/caddy-dev/data:/data:rw

Environment="TZ=Asia/Seoul"

Secret=CADDY_ACME_KEY,target=/run/secrets/CADDY_ACME_KEY

Label=diun.enable=true
Label=diun.watch_repo=true

[Install]
WantedBy=default.target
```
#### Create systemd `.service` file

```bash
# linger has to be activated
ln -s ~/data/config/containers/caddy-dev/caddy-dev.container ~/.config/containers/systemd/caddy-dev.container

systemctl --user daemon-reload
```

#### Enable and start service

```bash
systemctl --user start caddy-dev.service
```

#### Verification

- https://dev-test.ilnmors.com
- user_test (group: users): 403 Forbidden
- admin_test (group: admins): File server

docs/archives/2025-12/08_development/08_04_dev_code-server.md
|
||||
Tags: #os, #configuration, #security , #virtualization, #container, #development
|
||||
|
||||
## Code-Server
|
||||
|
||||
Code-Server is an open source and self-hosted Code editer (or IDE with its plugins) to use on web browser. This supports terminal and code editer, git, and ansible. It will be used as a bastion host in the home-lab. Code-Server doesn't support login system. Therefore, authelia and caddy's Forward-Auth function will be used in this homelab system.
|
||||
|
||||
### Secret management

- File:
  - ~/data/config/secrets/.secret.yaml

- Edit `.secret.yaml` with `edit_secret.sh`

```yaml
# ~/data/config/secrets/.secret.yaml
# CODE-SERVER:
CODESERVER_SSH_KEY: SSH_KEY_VALUE
```

```bash
ssh-keygen -t ed25519 -f /run/user/$UID/codeserver -C "code-server@ilnmors.internal"
> [enter] # no passphrase

cat /run/user/$UID/codeserver
> Private key value
# add in .secret.yaml as CODESERVER_SSH_KEY
cat /run/user/$UID/codeserver.pub
> Public key value
# add in .secret.yaml as an annotation

rm -rf /run/user/$UID/codeserver*

extract_secret.sh .secret.yaml -f CODESERVER_SSH_KEY | podman secret create CODESERVER_SSH_KEY -
```

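To confirm the secret landed in Podman's store, list it by name (a sketch, guarded so it degrades on hosts without Podman):

```shell
# CODESERVER_SSH_KEY should appear among the stored secret names.
if command -v podman >/dev/null 2>&1; then
  secrets=$(podman secret ls --format '{{.Name}}' 2>/dev/null || true)
else
  secrets=""
fi
echo "${secrets:-<no secrets listed>}"
```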
### Preparation

#### Create directory for container

```bash
mkdir -p ~/data/containers/code-server
chmod -R 700 ~/data/containers/code-server
setfacl -m d:g::0 ~/data/containers/code-server
setfacl -m d:o::0 ~/data/containers/code-server
setfacl -m u:dev:rwx ~/data/containers/code-server
setfacl -m u:100999:rwx ~/data/containers/code-server
setfacl -d -m u:dev:rwx ~/data/containers/code-server
setfacl -d -m u:100999:rwx ~/data/containers/code-server
mkdir -p ~/data/containers/code-server/{config,local,ssh,workspace}
echo "$PUBLIC_SSH_KEY_VALUE" > ~/data/containers/code-server/ssh/id_codeserver.pub
```

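The resulting permissions can be inspected with `getfacl`; the `user:dev` and `user:100999` entries and their `default:` counterparts should all show `rwx` (a sketch, guarded for hosts without ACL tools):

```shell
# Print the effective and default ACL entries on the container directory.
dir="$HOME/data/containers/code-server"
if command -v getfacl >/dev/null 2>&1 && [ -d "$dir" ]; then
  getfacl -p "$dir"
else
  echo "skipping: getfacl or $dir not available here"
fi
```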
> The Code-Server container runs as 1000:1000 (coder:coder) inside the container, which maps to UID 100999 on the host. Therefore, the directories need ACL entries set via `setfacl`.

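The 100999 figure follows from rootless Podman's UID mapping: container UID 0 maps to the user's own UID, and container UIDs 1..N map into the subordinate range from `/etc/subuid`. A sketch of the arithmetic, assuming the common default range start of 100000:

```shell
# host_uid = subuid_range_start + container_uid - 1
# (container UID 0 is the user itself, so the sub-range covers UIDs 1 and up)
subuid_start=100000   # assumed value; check the dev user's entry in /etc/subuid
container_uid=1000    # the coder user inside the container
host_uid=$((subuid_start + container_uid - 1))
echo "$host_uid"      # 100999
```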
#### Add new domain in BIND

Following [here](../06_network/06_03_net_bind.md).

- net server
  - file: ~/data/containers/bind/lib/db.ilnmors.internal

```ini
# ...
code-server IN CNAME auth.ilnmors.internal.
# ...
```

```bash
# The Adguard container has Requires=bind.service, so restarting bind also restarts Adguard.
systemctl --user restart bind
```

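The new record can be verified by querying BIND directly (a sketch; 192.168.10.11 is the net server's address used elsewhere in this documentation):

```shell
# Ask the BIND server for the CNAME; expect auth.ilnmors.internal. in the reply.
answer=$(dig +short +time=2 +tries=1 @192.168.10.11 code-server.ilnmors.internal CNAME 2>/dev/null || true)
echo "${answer:-no answer (server unreachable from here?)}"
```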
#### Add new rules in Caddy and authelia

##### caddy-auth

- auth server
  - file: ~/data/containers/caddy-auth/etc/Caddyfile

```ini
# ...
code-server.ilnmors.internal {
    import private_tls
    import crowdsec_log
    route {
        crowdsec
        import apply_forward_auth https://dev.ilnmors.internal
    }
}
# ...
```

##### caddy-dev

- dev server
  - file: ~/data/containers/caddy-dev/etc/Caddyfile

```ini
# ...
dev.ilnmors.internal {
    import private_tls
    # ...
    @code-server header X-Forwarded-Host code-server.ilnmors.internal
    # ...
    route @code-server {
        reverse_proxy host.containers.internal:8000 {
            # Restore the original `Host` value from `X-Forwarded-Host` in the
            # sidecar Caddy's Caddyfile, otherwise WebSocket connections break.
            header_up Host {http.request.header.X-Forwarded-Host}
        }
    }
}
```

##### authelia

```yaml
# configuration.yml

# ...
# Access control configuration
access_control:
  default_policy: 'deny'
  rules:
    # authelia portal
    - domain: 'authelia.ilnmors.internal'
      policy: 'bypass'
    - domain: 'authelia.ilnmors.com'
      policy: 'bypass'
    - domain: 'dev-test.ilnmors.com'
      policy: 'one_factor'
      subject:
        - 'group:admins'
    # Access control for Forward_Auth
    - domain: 'code-server.ilnmors.internal'
      policy: 'one_factor'
      subject:
        - 'group:admins'

# ...
```

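Before restarting Authelia, the edited file can be checked with its built-in validator. A hedged sketch: the container name `authelia` and the config path `/config/configuration.yml` are assumptions about this deployment, not taken from the section above.

```shell
# Run Authelia's config validator inside its container (names are assumptions).
if command -v podman >/dev/null 2>&1; then
  podman exec authelia authelia validate-config --config /config/configuration.yml \
    || echo "validation skipped (authelia container not reachable here)"
else
  echo "podman not installed"
fi
checked=yes
```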
#### SSH configuration

##### Each server

- file: ~/.ssh/authorized_keys

```ini
# ...
Contents of code-server's public key value
# ...
```

##### Code-Server container

- file: ~/data/containers/code-server/ssh/config

```ini
Host vmm
    HostName 192.168.1.10
    User vmm
    IdentityFile /run/secrets/CODESERVER_SSH_KEY

Host net
    HostName 192.168.10.11
    User net
    IdentityFile /run/secrets/CODESERVER_SSH_KEY

Host auth
    HostName 192.168.10.12
    User auth
    IdentityFile /run/secrets/CODESERVER_SSH_KEY

# dev is the server hosting code-server itself, so it is reached via host.containers.internal
Host dev
    HostName host.containers.internal
    User dev
    IdentityFile /run/secrets/CODESERVER_SSH_KEY

Host app
    HostName 192.168.10.14
    User app
    IdentityFile /run/secrets/CODESERVER_SSH_KEY
```

```bash
podman unshare chown 1000:1000 ~/data/containers/code-server/ssh/config
```

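Once the config and key are in place, connectivity can be spot-checked from inside the container using the host aliases defined above (a guarded sketch):

```shell
# Try a non-interactive SSH hop to one of the configured hosts; BatchMode makes
# a missing or rejected key fail fast instead of prompting for a password.
if command -v podman >/dev/null 2>&1; then
  podman exec code-server ssh -o BatchMode=yes -o ConnectTimeout=5 net hostname \
    || echo "ssh check failed (container or target not reachable here)"
else
  echo "podman not installed"
fi
attempted=yes
```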
### Podman Image

#### Podman containerfile

Code-Server can be extended with various modules; Git and Ansible will be used in this project.

- file:
  - ~/data/config/containers/code-server/containerfile-code-server-4.105.1
  - ~/data/config/containers/code-server/root_ca.crt

```containerfile
FROM codercom/code-server:4.105.1

USER root

RUN export SUDO_FORCE_REMOVE=yes && \
    apt-get update && \
    apt-get install -y --no-install-recommends git ansible curl jq age gnupg && \
    apt-get purge -y --auto-remove sudo && \
    apt-get clean

RUN curl -LO https://github.com/getsops/sops/releases/download/v3.11.0/sops-v3.11.0.linux.amd64 && \
    mv sops-v3.11.0.linux.amd64 /usr/local/bin/sops && \
    chmod +x /usr/local/bin/sops && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY ./root_ca.crt /usr/local/share/ca-certificates/root_ca.crt

RUN update-ca-certificates

USER coder
```

#### Podman image build

```bash
podman build -t code-server:4.105.1 -f ~/data/config/containers/code-server/containerfile-code-server-4.105.1 . && podman image prune -f
# prune removes the dangling images left over from the build
```

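A quick smoke test of the freshly built image: run one of the baked-in tools once and confirm it answers (a guarded sketch):

```shell
# sops was installed by the containerfile above; it should print a version string.
if command -v podman >/dev/null 2>&1; then
  podman run --rm localhost/code-server:4.105.1 sops --version \
    || echo "image not available here"
else
  echo "podman not installed"
fi
smoke=done
```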
### Quadlet

- File:
  - ~/data/config/containers/code-server/code-server.container

```ini
# ~/data/config/containers/code-server/code-server.container
# ~/.config/containers/systemd/code-server.container
[Quadlet]
DefaultDependencies=false

[Unit]
Description=Code-Server

After=caddy-dev.service
Wants=caddy-dev.service

[Container]
Image=localhost/code-server:4.105.1

ContainerName=code-server

HostName=code-server

# Host port 8000 is used because CrowdSec already occupies port 8080
PublishPort=8000:8080/tcp

Volume=%h/data/containers/code-server/workspace:/home/coder/workspace:rw
Volume=%h/data/containers/code-server/config:/home/coder/.config:rw
Volume=%h/data/containers/code-server/local:/home/coder/.local:rw
Volume=%h/data/containers/code-server/ssh:/home/coder/.ssh:rw

Environment="TZ=Asia/Seoul"
# When you need root permission, enter the container from the dev server with `podman exec -u root -it code-server`

Secret=CODESERVER_SSH_KEY,target=/run/secrets/CODESERVER_SSH_KEY

Label=diun.enable=true
Label=diun.watch_repo=true
# This label needs matching configuration in `diun.yml`
Label=diun.regopt=code-server-source

[Install]
WantedBy=default.target
```

#### Create systemd `.service` file

```bash
# lingering must be enabled for this user (loginctl enable-linger)
ln -s ~/data/config/containers/code-server/code-server.container ~/.config/containers/systemd/code-server.container

systemctl --user daemon-reload
```

#### Enable and start service

```bash
systemctl --user start code-server.service
```

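After starting, the unit's health can be confirmed and its logs inspected (a sketch; both commands degrade quietly where the unit does not exist):

```shell
# is-active prints "active" for a healthy unit; journalctl shows recent output.
systemctl --user is-active code-server.service 2>/dev/null || echo "unit not active here"
journalctl --user -u code-server.service -n 20 --no-pager 2>/dev/null || true
status_checked=yes
```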
#### Disable password

```bash
nano ~/data/containers/code-server/config/code-server/config.yaml
# bind-addr: 127.0.0.1:8080
# auth: none <- change this from `password` to `none`
# password: <- remove this line
# cert: false
```

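After switching `auth` to `none`, restart the service and confirm code-server answers locally without its own login page, leaving Caddy's forward-auth as the only gate (a guarded sketch):

```shell
# Expect HTTP 200 straight from code-server on the published port once
# password auth is off; "000" is the fallback when it is unreachable.
systemctl --user restart code-server.service 2>/dev/null || echo "restart skipped here"
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8000/ 2>/dev/null || true)
echo "HTTP ${code:-000}"
```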
#### Set default workspace

- Setting:Profile:Default:Folders&workspaces
  - Add Folder > /home/coder/workspace
- Setting:Settings:Workbench:Settings Editor
  - Terminal > Integrated: Gpu Acceleration: off
- Edit in settings.json

```json
{
  "workbench.settings.applyToAllProfiles": [],
  "workbench.colorCustomizations": {
    "terminal.background": "#0C0C0C",
    "terminal.foreground": "#CCCCCC"
  },
  "files.associations": {
    "*.container": "ini",
    "*.service": "ini",
    "*.timer": "ini",
    "containerfile*": "dockerfile"
  }
}
```

#### Verification