Welcome to the localnet, have a look around.
(or click here for technical documentation)
Do any operation on PDFs you can imagine:
(click here for docs)
A bookmarker and link aggregator. Unfortunately, it's a single-user app. The aesthetics are not the best, but the killer feature is bulk import.
(click here for docs)
RSS feed reader. Ask your friendly admin to open registration.
(click here for docs)
Gives you access to IRC channel #ebooks on irc.irchighway.net.
You can search for books and download them.
Supports only one user at a time.
Please download only public domain books.
(click here for docs)
A note-taking app. Ask your friendly admin to open registration.
(click here for docs)
OCI-compliant image registry. Anyone on the LAN can read and write.
(click here for docs)
A Git server and issue-tracker.
(click here for docs)
Provides insights into the cluster metrics.
(click here for docs)
This is a password-protected network-wide ad blocker. It blocks ads on all devices connected to the network.
(click here for docs)
At the moment the setup comprises two Pi4s with PoE hats connected to a Cisco 8-port switch. They translate into two nodes in a k8s cluster:
alpha
beta (master)
With Pi Imager I installed Ubuntu server on both nodes. For the Pi Imager configuration I used the username:
porkbrain
Once the Pis booted I assigned them static IPs in the router. Read more about the DNS setup here.
I then ssh'd into both nodes and performed the following setup.
I installed microk8s on both nodes with:
sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
su - $USER
Then I reset the ssh session and disabled HA on both nodes with:
microk8s disable ha-cluster
On the beta master node I ran the following to join the alpha node:
microk8s add-node
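The add-node command prints a one-time join command that has to be run on the other node. A sketch of what that looks like on alpha, using beta's IP (the token here is made up, use the one printed by add-node):
# run on alpha, using the exact command printed by `microk8s add-node` on beta; the token is illustrative
microk8s join 192.168.50.89:25000/0123456789abcdef0123456789abcdef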
I enabled these addons on beta:
microk8s enable dns
microk8s enable ingress
microk8s enable cert-manager
microk8s enable observability
microk8s enable dashboard
For convenience I added these aliases to the .bashrc:
echo "alias k='microk8s kubectl'" >> ~/.bashrc
echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
source ~/.bashrc
Once all was set up I rsync'd the configs from my machine to beta and applied them:
rsync -avz ./configs/ porkbrain@beta.lan:~/configs/
ssh porkbrain@beta.lan "microk8s kubectl apply -f ~/configs/"
Each service has a certificate issued by Let's Encrypt.
We use microk8s's cert-manager addon to issue certificates for each service.
Since our porkbrain.com domain is hosted on AWS Route53 but the LAN is not accessible from the internet, we use DNS-01 challenges.
There's an AWS IAM user with the necessary permissions that cert-manager needs to solve the challenge:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "route53:GetChange",
        "route53:GetHostedZone",
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets",
        "route53:UpdateHostedZoneComment"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/{HOSTED_ZONE_ID}",
        "arn:aws:route53:::change/{HOSTED_ZONE_ID}"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListHostedZonesByName"
      ],
      "Resource": "*"
    }
  ]
}
Then we create a secret with the credentials of the IAM user:
k create secret generic route53-credentials-secret -n cert-manager \
--from-literal=access-key-id=<YOUR_ACCESS_KEY_ID> \
--from-literal=secret-access-key=<YOUR_SECRET_ACCESS_KEY>
Now each service can define a TLS secret in its ingress manifest and cert-manager will take care of the rest.
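For reference, here's a minimal sketch of what this could look like: a ClusterIssuer plus an illustrative ingress for FreshRSS. The issuer name, region, namespace and service name are assumptions, and the exact Route53 solver fields may differ between cert-manager versions:
# the issuer name and region are illustrative
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <YOUR_EMAIL>
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
      - dns01:
          route53:
            region: us-east-1
            accessKeyIDSecretRef:
              name: route53-credentials-secret
              key: access-key-id
            secretAccessKeySecretRef:
              name: route53-credentials-secret
              key: secret-access-key
---
# illustrative ingress for one of the services; names are assumptions
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: freshrss
  namespace: freshrss
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-dns01
spec:
  tls:
    - hosts:
        - rss.lan.porkbrain.com
      secretName: freshrss-tls
  rules:
    - host: rss.lan.porkbrain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: freshrss
                port:
                  number: 80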
This app exposes StirlingPDF.
Their images come in three flavors: ultra-light, full (default) and fat.
We use the full version, which has all the features but does not contain Calibre.
The image is about 900MB.
I observed the container idling at around 300MB of RAM.
It's a stateless container.
I couldn't change the web server's port from the default 8080.
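The ingress-facing Service can simply map port 80 to the container's 8080. A minimal sketch, assuming a stirling-pdf namespace and pod label (both are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: stirling-pdf
  namespace: stirling-pdf
spec:
  selector:
    app: stirling-pdf   # assumed pod label
  ports:
    - port: 80
      targetPort: 8080  # the fixed StirlingPDF port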
The container runs on the alpha node.
This app exposes Shaarli.
It stores all data in flat files on a 64Mi persistent volume.
Its memory footprint is 60Mi, which is high for what it does.
The container runs on the alpha node.
This app exposes FreshRSS.
It has a 1Gi persistent volume for the SQLite database.
It fetches new feeds once an hour, which can be configured with the CRON_MIN env variable. For example, to fetch twice an hour the env would be set to 1,31.
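In the deployment this is just an environment variable on the FreshRSS container; a small sketch of the relevant snippet (the surrounding container spec is omitted):
# goes under the FreshRSS container spec
env:
  - name: CRON_MIN
    value: "1,31"  # fetch at minute 1 and 31 of every hour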
The container runs on the alpha node.
This app exposes openbooks.
It has a 1Gi persistent volume to keep the downloaded books available.
Since this is IRC, we need a username to connect:
k create secret generic openbooks-name-secret --from-literal=name_value=xxx -n openbooks
We don't want to advertise to the world which public domain books we are downloading, so the service should be behind a VPN. We attempted to set up a VPN gateway for this service, using Mullvad as the VPN provider. It works with this minimal docker compose:
---
version: "3"
services:
  wireguard:
    image: ghcr.io/wfg/wireguard
    container_name: wireguard
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    cap_add:
      - NET_ADMIN
    environment:
      - ALLOWED_SUBNETS=192.168.0.0/24,192.168.1.0/24
    volumes:
      - path/to/wireguard.conf:/etc/wireguard/wg0.conf
    ports:
      - 8080:8080
  openbooks:
    container_name: openbooks
    command: --name openbooks --port 8080
    image: evanbuss/openbooks:latest
    network_mode: service:wireguard
The wireguard config can be downloaded from your Mullvad account.
Unfortunately, with k8s it's more work. It seems the VPN gateway container must run in the same pod as the openbooks container. We need to publish https://github.com/wfg/docker-wireguard into the zot registry for ARM64. Find more information on what to do next in the app's k8s configuration file.
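A sketch of how the ARM64 image could be built and pushed with docker buildx, using the GitHub repository directly as the build context (the image tag is illustrative, and this assumes anonymous pushes, which the zot config allows):
# build for ARM64 straight from the upstream repository and push to zot; the tag is illustrative
docker buildx build \
  --platform linux/arm64 \
  --tag zot.lan.porkbrain.com/wfg/wireguard:latest \
  --push \
  https://github.com/wfg/docker-wireguard.git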
The container runs on the alpha node.
This app exposes Memos.
It has a 1Gi persistent volume for the SQLite database.
The container runs on the alpha node.
This app exposes zot using their image.
We mount two volumes.
One is for configuration, as zot uses the main config.json file for virtually everything.
In the config we declare that all images should be anonymously readable and anyone can push images.
Updates are not allowed; it's an append-only registry.
The second volume is for the images themselves and has 10Gi of storage.
The ingress is configured with a 10 minute proxy timeout and no limit on the request body size.
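With the nginx ingress controller these settings translate into annotations along these lines (a sketch; the values follow the description above, 600 seconds and a body size of 0 meaning unlimited):
# annotations on the zot ingress; values assumed from the description above
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"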
The container runs on the beta master node.
This app exposes Gogs. When freshly installed, it will go through a setup wizard.
We mount a single volume which contains the git data, configs, logs, ssh keys and SQLite database.
We define one init container. It writes our own configuration, which overrides the default one, to the aforementioned volume. When the setup wizard is launched, the default settings are pre-filled. The init container is only relevant once: the first time the app is deployed. It runs idempotently because it checks for the existence of our config file and does nothing if the file is already present.
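A sketch of what that check could look like, assuming the standard Gogs data layout where the config ends up at /data/gogs/conf/app.ini and our template is mounted at /bootstrap/app.ini (both paths are assumptions, not the actual manifest):
# seed the config only on the very first deployment; the paths are assumptions
if [ ! -f /data/gogs/conf/app.ini ]; then
  mkdir -p /data/gogs/conf
  cp /bootstrap/app.ini /data/gogs/conf/app.ini
fi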
The main container exposes a webserver on port 3000.
Binding it to port 80 does not work because the container runs as the git user, which does not have permission to bind to ports below 1024.
More interestingly, the container runs an ssh server on port 22 and exposes it to the host over port 50022.
When cloning a repository over ssh, the remote URL is of the form ssh://git@git.lan.porkbrain.com:50022/user/repo.git
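For example (user and repo are placeholders):
git clone ssh://git@git.lan.porkbrain.com:50022/user/repo.git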
The container runs on the beta master node.
If you cannot clone a repository over ssh, it's possible that the public ssh key you provided in the account settings was not stored in the container's /data/git/.ssh/authorized_keys file.
To fix this, go to the admin settings and trigger the action "Rewrite '.ssh/authorized_keys' file".
If the server returns an "Invalid csrf token." error, delete the csrf cookie in your browser.
By enabling the undocumented observability addon on the beta master node we get a Grafana instance running in the kube-prom-stack-grafana container in the observability namespace.
When freshly installed, it had the default user admin with password prom-operator.
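The initial admin password can also be read back from the secret created by the chart. A sketch, assuming the default kube-prometheus-stack naming (the secret and key names are assumptions):
# the secret and key names assume the default chart values
k get secret -n observability kube-prom-stack-grafana \
  -o jsonpath='{.data.admin-password}' | base64 -d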
This app is a Pi-hole instance. We use it as the DNS server for the LAN. These are the DNS settings:
alpha.lan 192.168.50.132
beta.lan 192.168.50.89
asusrouter.com 192.168.50.1
These records are stored in the container file /etc/pihole/custom.list.
The IPs for both Pi4s are statically assigned in the router DHCP settings. The Pi-hole container exposes port 53 over both TCP and UDP to the host. The node's IP is then used as the primary DNS server for the LAN. This is also configured in the router.
We then store these CNAME records:
ebooks.lan.porkbrain.com alpha.lan
grafana.lan.porkbrain.com beta.lan
link.lan.porkbrain.com alpha.lan
pdf.lan.porkbrain.com alpha.lan
pihole.lan.porkbrain.com beta.lan
rss.lan.porkbrain.com alpha.lan
x.lan.porkbrain.com alpha.lan
zot.lan.porkbrain.com beta.lan
git.lan.porkbrain.com beta.lan
These records are stored in the container file /etc/dnsmasq.d/05-pihole-custom-cname.conf.
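Each record in that file uses the dnsmasq cname syntax, for example:
cname=git.lan.porkbrain.com,beta.lan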
The beta node is the cluster master node.
Each service is statically assigned to a node.
The Pi-hole itself uses Cloudflare (DNS1=1.1.1.1) and Google (DNS2=8.8.8.8) as upstream DNS servers.
The password to access the web interface is stored in a secret pihole-secret under the key WEBPASSWORD:
k create secret generic pihole-secret --from-literal=WEBPASSWORD='your-password' -n pihole
The container itself is stateful with 500Mi of storage as recommended in the pihole kubernetes docs.
It exposes the web interface on port 80 to a service that's behind an ingress controller.
Because the webserver serves the admin interface on the /admin path, we use the nginx app-root annotation to redirect the root path to /admin. Ideally, we would rewrite the path, but it's unclear how.
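The annotation in question (a minimal sketch of the relevant part of the ingress metadata):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /admin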
The container runs on the beta master node.