Setting up a Raspberry Pi 4 k3s cluster with NFS persistent storage

Michael Tissen
Jun 10, 2020


There are not many options for adding persistent storage to a k3s Raspberry Pi cluster. I will present a relatively simple and powerful method using the nfs-client-provisioner.

UPDATE: Updated to the new NFS client provisioner repository and a newer Kubernetes version.

Install k3s

Install on the master:

curl -sfL https://get.k3s.io | sh -

Check with systemctl that it is running:

sudo systemctl status k3s
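
k3s ships with an embedded kubectl and stores its kubeconfig at /etc/rancher/k3s/k3s.yaml (owned by root), which is why the kubectl commands in this article are run with sudo:

# k3s bundles its own kubectl, so no separate install is needed
sudo k3s kubectl get node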

Get the token for the worker nodes:

sudo cat /var/lib/rancher/k3s/server/node-token
K108e1a745eb43ad8cf51a08534545340dbf0c8fcdfa3b76a3fb86c837c75a1::node:be4429c2343d37b779e54a45656557e2

Join a worker

export K3S_URL="https://your_ip_or_url_of_master:6443"
export K3S_TOKEN="K108e1a745eb43ad8cf51a08534545340dbf0c8fcdfa3b76a3fb86c837c75a1::node:be4429c2343d37b779e54a45656557e2"
curl -sfL https://get.k3s.io | sh -
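
The same can also be done in a single line by prefixing the install script with the variables:

# join a worker in one line; replace the URL and token with your own values
curl -sfL https://get.k3s.io | K3S_URL="https://your_ip_or_url_of_master:6443" K3S_TOKEN="K108e1a745eb43ad8cf51a08534545340dbf0c8fcdfa3b76a3fb86c837c75a1::node:be4429c2343d37b779e54a45656557e2" sh -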

Verify your cluster

$ kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   58m   v1.14.5-k3s.1
node2   Ready    worker   41s   v1.14.5-k3s.1

The master and worker are ready; now you can start with the fun part. :-)

Install an NFS server

I’ve created a folder named nfs in the home directory on my NFS server, with the following NFS export (all on one line):

/home/my_user/nfs 192.168.178.0/24(rw,all_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)

and my NFS server’s hostname is my-srv (the IP can also be used instead).
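
If you still need to set up the server itself, a minimal sketch for a Debian/Ubuntu host looks like this (using the folder and export line from above):

# install the NFS server
sudo apt-get install -y nfs-kernel-server
mkdir -p /home/my_user/nfs
# append the export line from above to /etc/exports, then reload the export table
echo '/home/my_user/nfs 192.168.178.0/24(rw,all_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)' | sudo tee -a /etc/exports
sudo exportfs -ra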

Install the nfs-client-provisioner into the cluster

The nfs-client-provisioner automatically provisions storage (persistent volumes) for your applications on an NFS server. You only need to specify a PersistentVolumeClaim and use this claim in your deployments.

Download the necessary files for deploying the provisioner:

wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/raw/master/deploy/rbac.yaml
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/raw/master/deploy/class.yaml
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/raw/master/deploy/deployment-arm.yaml
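
For reference, class.yaml defines the storage class the provisioner serves. At the time of writing it looked roughly like the sketch below (check your downloaded file, as the repository may have changed since); the important parts are that metadata.name (managed-nfs-storage) is what you will reference in your claims, and that provisioner matches the PROVISIONER_NAME environment variable in the deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"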

Install the NFS client utilities (nfs-common) on all Kubernetes nodes (on the hosts):

(On newer Kubernetes versions this is needed for the NFS mounts to work.)

sudo apt-get install -y nfs-common
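
From any node you can quickly verify that the export is visible (showmount comes with nfs-common):

# list the exports offered by the NFS server
showmount -e my-srv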

Edit deployment-arm.yaml
Replace the NFS server hostname (my-srv) and the path (/home/my_user/nfs) with your own values:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: my-srv
            - name: NFS_PATH
              value: /home/my_user/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: my-srv
            path: /home/my_user/nfs

Apply the manifests:

sudo kubectl create -f rbac.yaml
sudo kubectl create -f deployment-arm.yaml
sudo kubectl create -f class.yaml
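
After a short while the provisioner pod should be running and the storage class should exist:

# the provisioner pod should reach the Running state
sudo kubectl get pods -l app=nfs-client-provisioner
# the storage class from class.yaml should be listed
sudo kubectl get storageclass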

Deploy a Ghost blog

As an example I will deploy a Ghost blog. It uses SQLite3 for persistent storage.

Create a PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
  labels:
    app: blog
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

sudo kubectl create -f volume.yaml

To use the NFS storage it is important to set the storage class in the annotations! The name of the claim is used in the deployment below.
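
The claim should be bound by the provisioner almost immediately:

# STATUS should show Bound once the provisioner has created the backing volume
sudo kubectl get pvc ghost-pv-claim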

Create a deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  labels:
    app: blog
spec:
  replicas: 2
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: ghost:2.28-alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 2368
          env:
            - name: url
              value: http://my-blog.com
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: content
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: ghost-pv-claim

sudo kubectl create -f deployment.yaml

The mountPath inside the container is backed by the persistent volume.
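
Check that both replicas come up and share the same claim:

# both blog pods should reach the Running state
sudo kubectl get pods -l app=blog -o wide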

Check if the PVC is working correctly:

To check whether the persistent storage with NFS is working as expected, you can look at the folder that was created on your NFS server:

~/nfs$ ls
default-ghost-pv-claim-pvc-efae13ea-c131-11e9-8496-dca63208ed94
~/nfs$ cd default-ghost-pv-claim-pvc-efae13ea-c131-11e9-8496-dca63208ed94
~/nfs/default-ghost-pv-claim-pvc-efae13ea-c131-11e9-8496-dca63208ed94$ ls
apps  data  images  logs  settings  themes

You can see the persistent data of Ghost on the NFS server.

Harvest the fruits

You can access your Ghost deployment with port forwarding:

kubectl port-forward <pod-name> 2368 
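
If you don't want to look up a pod name first, kubectl can also pick a pod from the deployment for you:

# forward local port 2368 to port 2368 of one of the deployment's pods
kubectl port-forward deployment/blog 2368:2368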

Open http://localhost:2368 in your browser.

Our Ghost application is running in k3s on the Raspberry Pi cluster!

Hurray!

What's next

  • Using Traefik as an ingress controller to access the blog from the outside, with HTTPS support via Let's Encrypt.
  • Using Postgres instead of SQLite3.
