Setting up Minio for Backups
It's about time I had a solid backup solution for my Kubernetes cluster, as well as the other services I run around here. It will also reduce my reliance on this exact cluster: currently all my data is stored in Longhorn, which isn't the most practical to get out. So, if I were to replace my cluster, getting my data back onto it would be a nightmare.
The proper (?) way of doing this would be to join the new nodes to the existing cluster, force data replication to the new nodes, and then remove the old nodes. But even then, you'd still have no backups :3
Deploying Minio
services:
  minio:
    image: quay.io/minio/minio
    container_name: minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: root
      MINIO_ROOT_PASSWORD: rootpwd
    volumes:
      - ~/minio/data:/data
    command: server /data --console-address ":9001"
And with a simple docker-compose up, I now have Minio running!
The reason I'm running this in Docker and not Kubernetes is that this setup will sit apart from my cluster (for resiliency, you know). A cool upgrade would be to run Minio in a second cluster, allowing replication of my backups, but that would be way overkill for my use case.
Minio Setup
After accessing the Minio web panel through port 9001, and logging in with my super-secret credentials, I create a bucket. This bucket will store all the backups from Longhorn.
Okay, so we have our bucket; we now need a way for Longhorn to authenticate with Minio. This comes in the form of access keys. Don't forget to copy them into your text editor of choice for now.
We're nearly there! As Longhorn is on Kubernetes, I will create a secret containing these keys which Longhorn will subsequently consume to authenticate with Minio. The command for this is really simple. Just don't forget to change the values.
kubectl create secret generic minio-auth \
  --from-literal=AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<your-aws-secret-access-key> \
  --from-literal=AWS_ENDPOINTS=http://192.168.1.199:9000 \
  -n longhorn-system
Then in the Longhorn UI, I set the target bucket, as well as the secret which must be used to authenticate.
Usually I would have to specify the AWS S3 region but, as this is a self-hosted Minio instance, I don't have any regions. So I just leave it with a dummy value to keep the parser satisfied.
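Concretely, Longhorn's backup target is an S3-style URL of the form s3://<bucket>@<region>/, with the credential secret name next to it. A sketch of what the two fields look like, assuming a bucket named longhorn-backups and us-east-1 as the dummy region:

```
Backup Target:                    s3://longhorn-backups@us-east-1/
Backup Target Credential Secret:  minio-auth
```

The region segment is required by the URL format even though Minio ignores it; the actual endpoint comes from the AWS_ENDPOINTS value in the secret.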
This is the volume inspect screen in Longhorn. I just created a backup of my ghost website, as you can see in the snapshots and backups tab.
On the backups tab, I can also see that the PVC has been backed up. And to confirm, I can check the object browser in Minio.
And finally, I create a recurring job for all my volumes to back them up at midnight every day. I can also set a retention policy; I'll probably set it to 2, since the backed-up data doesn't take up that much space.
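For reference, the same recurring job can be declared as a manifest instead of clicking through the UI. A minimal sketch, assuming Longhorn's v1beta2 RecurringJob CRD and the built-in default group (the job name is my own choice):

```yaml
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-backup
  namespace: longhorn-system
spec:
  task: backup        # take a backup (as opposed to a snapshot)
  cron: "0 0 * * *"   # midnight every day
  retain: 2           # keep the two most recent backups
  concurrency: 1      # back up one volume at a time
  groups:
    - default         # applies to volumes without an explicit job
```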
This was a pretty image-heavy blog post, but Minio also has a CLI (mc) you can use alongside the server for creating buckets, users, permissions, etc.
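As a taste of it, the bucket creation from earlier could have been done entirely from the terminal. A sketch using mc, assuming the credentials and endpoint from the compose file above (the alias name "homelab" and bucket name "longhorn-backups" are my own placeholders):

```shell
# Register the Minio server under a local alias
mc alias set homelab http://192.168.1.199:9000 root rootpwd

# Create the backup bucket, then list buckets to confirm
mc mb homelab/longhorn-backups
mc ls homelab
```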
Thanks for reading this far, and don't forget: there are two kinds of people, those who back up their data and those who haven't lost their data yet.