Exciting Announcement: New Kubernetes Cluster on Raspberry Pi

I’m thrilled to announce the successful installation of a brand-new Kubernetes cluster on four Raspberry Pi devices! This project, built on Raspberry Pi OS Bookworm with Ansible and NFS, showcases the potential of combining powerful software tools with the versatility of Raspberry Pi hardware.

Project Details

Hardware Setup:

  • Devices: 4 Raspberry Pi 4 units
  • Storage: M.2 256GB USB drives (superior to traditional SD cards for reliability and speed)
  • Networking: A small network switch for robust cabled connections

Software Stack:

  • Operating System: Raspberry Pi OS 64-bit Lite (Bookworm)
  • Automation: Ansible for automated and consistent setup
  • Storage: NFS for shared, reliable storage

Installation Overview

The installation process is impressively quick and efficient, taking about 30 minutes from start to finish:

  • 15 minutes: Installing the OS and necessary dependencies using Ansible
  • 15 minutes: Setting up Kubernetes with one master and three nodes, including auto-provisioned storage via NFS

Services Deployed

As part of this new installation, several key services have already been deployed on the cluster using kubectl:

  • Portainer: For managing Kubernetes environments
  • NetAlert: For network monitoring
  • Prometheus and Grafana: For monitoring and visualization
  • Minecraft Server: For gaming and experimentation
  • Homepage Dashboard: For a personalized user interface
  • Searxng: For metasearch engine capabilities

What’s Next?

In the coming days, I will be posting detailed guides and Ansible scripts for setting up these services on my homepage. These resources will include step-by-step instructions to help you replicate this setup and customize it for your own needs.

Stay tuned for more updates and detailed tutorials. This new installation demonstrates the impressive capabilities of Kubernetes on Raspberry Pi, and I’m excited to share more about this journey with you.

Thank you for your interest, and keep an eye out for the upcoming posts with detailed guides and scripts!




How to Backup Docker Data to a Different Location in Your LAN

Prerequisites

  • Docker data located at /var/lib/docker/volumes.
  • SSH access to the target backup system.

Passwordless SSH Login

First, set up passwordless SSH login:

ssh-keygen -t rsa
ssh-copy-id root@192.168.0.225
ssh root@192.168.0.225

Docker Volume Backup Script

Create a backup script named docker_backup.sh:

#!/bin/bash
set -e

# Define variables
source_dir="/var/lib/docker/volumes"
backup_dir="/opt/docker_backups"
keep_backups=10
current_datetime=$(date +"%Y-%m-%d_%H-%M-%S")
backup_filename="$current_datetime-backup.tar"
remote_user="root"
remote_server="192.168.0.225"
remote_dir="/opt/remote_docker_backups"

# Check if source and backup directories exist
if [ ! -d "$source_dir" ]; then
  echo "Source directory does not exist."
  exit 1
fi
if [ ! -d "$backup_dir" ]; then
  echo "Backup directory does not exist."
  exit 1
fi

# Stop running Docker containers, remembering which ones were running
running_containers=$(docker ps -q)
if [ -n "$running_containers" ]; then
  docker stop $running_containers
fi

# Create the backup
tar -cpf "$backup_dir/$backup_filename" "$source_dir"

# Restart only the containers that were running before the backup
if [ -n "$running_containers" ]; then
  docker start $running_containers
fi

# Compress and transfer the backup
gzip "$backup_dir/$backup_filename"
backup_filename="$current_datetime-backup.tar.gz"
scp "$backup_dir/$backup_filename" "$remote_user@$remote_server:$remote_dir"

# Remove old backups
find "$backup_dir" -type f -name "*-backup.tar.gz" -mtime +$keep_backups -exec rm {} \;
ssh "$remote_user@$remote_server" "find $remote_dir -type f -name '*-backup.tar.gz' -mtime +$keep_backups -exec rm {} \;"

echo "Backup was created: $backup_dir/$backup_filename and copied to $remote_server:$remote_dir."

Run the script:

chmod +x docker_backup.sh
sudo ./docker_backup.sh
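The cleanup step near the end of the script keeps only recent archives by matching on file age with find's -mtime filter. Here is a minimal, self-contained sketch of that retention rule, run against a scratch directory instead of the real backup directory so nothing is touched:

```shell
#!/bin/sh
# Demonstrate the retention rule used above: delete *-backup.tar.gz
# files older than keep_backups days. Uses a scratch directory so
# no real backups are affected.
set -e
keep_backups=10
demo_dir=$(mktemp -d)

touch "$demo_dir/recent-backup.tar.gz"
touch -d "15 days ago" "$demo_dir/stale-backup.tar.gz"   # GNU touch

# Same pattern and age filter as the real script
find "$demo_dir" -type f -name "*-backup.tar.gz" -mtime +$keep_backups -exec rm {} \;

remaining=$(ls "$demo_dir")
echo "$remaining"    # only the recent archive survives
rm -rf "$demo_dir"
```

Note that -mtime +10 means "strictly more than 10 whole days old", so an archive created exactly 10 days ago is still kept.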

Ansible Alternative

Create an Ansible playbook named docker_backup.yml:

---
- name: Docker Backup Playbook
  hosts: rpidocker
  become: yes
  vars:
    source_dir: "/var/lib/docker/volumes"
    backup_dir: "/opt/docker_backups"
    keep_backups: 10
    current_datetime: "{{ lookup('pipe', 'date +%Y-%m-%d_%H-%M-%S') }}"
    backup_filename: "{{ current_datetime }}-backup.tar"
    remote_user: "root"
    remote_server: "192.168.0.225"
    remote_dir: "/opt/remote_docker_backups"

  tasks:
    - name: Check if source directory exists
      stat:
        path: "{{ source_dir }}"
      register: source_dir_stat

    - name: Fail if source directory does not exist
      fail:
        msg: "Source directory does not exist."
      when: not source_dir_stat.stat.exists

    - name: Check if backup directory exists
      stat:
        path: "{{ backup_dir }}"
      register: backup_dir_stat

    - name: Fail if backup directory does not exist
      fail:
        msg: "Backup directory does not exist."
      when: not backup_dir_stat.stat.exists

    - name: Stop running Docker containers
      shell: docker ps -q | xargs -r docker stop

    - name: Create backup archive
      command: tar -cpf "{{ backup_dir }}/{{ backup_filename }}" "{{ source_dir }}"

    - name: Start all Docker containers
      shell: docker ps -a -q | xargs -r docker start

    - name: Compress the backup archive
      command: gzip "{{ backup_dir }}/{{ backup_filename }}"
      args:
        chdir: "{{ backup_dir }}"

    - name: Copy backup to remote server
      command: scp "{{ backup_dir }}/{{ backup_filename }}.gz" "{{ remote_user }}@{{ remote_server }}:{{ remote_dir }}"

    - name: Delete older backups locally
      shell: find "{{ backup_dir }}" -type f -name "*-backup.tar.gz" -mtime +{{ keep_backups }} -exec rm {} \;

    - name: Delete older backups on remote server
      shell: ssh "{{ remote_user }}@{{ remote_server }}" "find {{ remote_dir }} -type f -name '*-backup.tar.gz' -mtime +{{ keep_backups }} -exec rm {} \;"

Run the playbook:

ansible-playbook -i inventory.ini docker_backup.yml

Your inventory.ini should look like:

[rpidocker]
192.168.0.224 ansible_user=root ansible_ssh_private_key_file=/path/to/your/private/key

Conclusion

You now have two methods to back up your Docker data securely to another location within your LAN. Choose the one that best fits your needs.




Automating Neofetch Installation and Configuration using Ansible

Introduction:
Neofetch is a simple and visually appealing command-line system information tool. By using Ansible, we can automate the installation of Neofetch and its configuration, including adding it to the .bashrc file and displaying the Raspberry Pi’s temperature. In this guide, we’ll walk you through creating an Ansible playbook to achieve this automation.

Step 1: Install Ansible
Before proceeding, make sure you have Ansible installed on your system. If you haven’t installed Ansible yet, you can follow the official Ansible installation instructions for your operating system.

Step 2: Create the Ansible Playbook
Create a new YAML file named neofetch_setup.yaml and add the following content to it:

---
- name: Install Neofetch and update .bashrc
  hosts: your_host  # Replace "your_host" with the target host or group where you want to install Neofetch.
  become: yes

  tasks:
    - name: Install Neofetch
      apt:
        name: neofetch
        state: present

    - name: Add Neofetch to .bashrc
      lineinfile:
        path: ~/.bashrc
        line: neofetch

    - name: Add vcgencmd measure_temp to .bashrc
      lineinfile:
        path: ~/.bashrc
        line: vcgencmd measure_temp

In the playbook, make sure to replace “your_host” with the target host or group where you want to install Neofetch. Also, ensure that your target host has the necessary privileges (sudo access) to install packages and modify the .bashrc file.
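Whichever module you use, the append to .bashrc should be idempotent: the line is added only once, no matter how many times the play runs. A plain-shell sketch of that guard, run against a temporary file in place of the real .bashrc:

```shell
#!/bin/sh
# Idempotent append: add a line to a file only when an exact match
# is not already present. Sketched against a temporary file.
set -e
bashrc=$(mktemp)

append_once() {
  # -q quiet, -x whole-line match, -F fixed string (no regex)
  grep -qxF "$1" "$2" || echo "$1" >> "$2"
}

append_once "neofetch" "$bashrc"
append_once "neofetch" "$bashrc"   # second call changes nothing

count=$(grep -cxF "neofetch" "$bashrc")
echo "$count"
rm -f "$bashrc"
```

This is exactly the behavior you want from the playbook: running it twice leaves a single neofetch line in the file.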

Step 3: Run the Ansible Playbook
To run the Ansible playbook and install Neofetch on the target host, use the following command:

ansible-playbook neofetch_setup.yaml

Ansible will connect to the target host, install Neofetch, and update the .bashrc file with the appropriate commands.

Step 4: Verify Neofetch Installation
To verify that Neofetch is installed and configured correctly, log in to the target host and open a new terminal. You should see the Neofetch output displaying system information. Additionally, the Raspberry Pi’s temperature will be displayed along with the system details.

Conclusion:
You’ve successfully created an Ansible playbook to automate the installation of Neofetch and its configuration on your Raspberry Pi. Ansible allows you to manage multiple hosts efficiently and ensure consistent setups across your infrastructure. Now you can enjoy using Neofetch to get a stylish system summary each time you open a terminal.

Happy system information tracking!




Automating Raspberry Pi Configuration with Ansible

Introduction:
Managing multiple Raspberry Pi devices can be time-consuming and error-prone if done manually. Fortunately, Ansible provides a powerful solution for automating configuration tasks across your Raspberry Pi fleet. In this guide, we’ll walk you through setting up Ansible, creating an inventory, and using playbooks to perform various tasks on your Raspberry Pi devices.

Prerequisites:
Before proceeding, make sure you have Python3 and pip installed on your Raspberry Pi. You can install pip using the following command:

sudo apt install python3-pip

Step 1: Install Ansible
First, let’s install Ansible using pip:

python3 -m pip install --user ansible-core==2.12.3

Step 2: Create Ansible Configuration
Now, create a directory for your Ansible project and add an ansible.cfg file inside it:

mkdir /ansible
vi /ansible/ansible.cfg

Paste the following configuration in the ansible.cfg file:

[defaults]
inventory = inventory
host_key_checking = False

Save the file and exit the editor.

Step 3: Create the Inventory
The inventory file contains information about the hosts (Raspberry Pi devices) you want to manage. Create an inventory file inside the /ansible directory:

vi /ansible/inventory

Add the following content to the inventory file:

[tower]
RPIT1 ansible_host=192.168.0.220
RPIT2 ansible_host=192.168.0.221
RPIT3 ansible_host=192.168.0.222
RPIT4 ansible_host=192.168.0.223

[test]
RPI2GB ansible_host=192.168.0.227

[docker]
RPIDOCKER ansible_host=192.168.0.224

[kali]
RPIKALI ansible_host=192.168.0.226

[zeros]
RPIZEROW ansible_host=192.168.0.228 
RPIZEROW2 ansible_host=192.168.0.225

[all:vars]
ansible_ssh_user=pi
ansible_ssh_pass=xxxxxxxxx

This inventory defines groups of hosts (tower, test, docker, kali, and zeros) and sets the SSH username and password used to access them.

Step 4: Test Host Reachability
Before proceeding with any configuration, make sure all defined hosts are reachable using Ansible’s ping module:

ansible all -m ping

Step 5: Check OS Information
Retrieve OS information on all clients using the shell module:

ansible all -m shell -a "lsb_release -a"

Step 6: Upgrade Servers
Use a playbook to upgrade all servers:

ansible-playbook -e "target_host=all" update_upgrade.yaml
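The update_upgrade.yaml playbook itself is not shown in this post; a minimal sketch of what it could contain, using the apt module (the target_host variable matches the -e flag above):

```yaml
---
- name: Update and upgrade packages
  hosts: "{{ target_host }}"
  become: yes
  tasks:
    - name: Update apt cache and upgrade all packages
      apt:
        update_cache: yes
        upgrade: dist
```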

Step 7: Install Neofetch and Temperature
Create a playbook to install Neofetch and display the system temperature:

ansible-playbook -e "target_host=all" install_neofetch.yaml

Step 8: Shutdown the Tower
Create a playbook to shut down the tower group (the Docker host stays on):

ansible-playbook -e "target_host=tower" shutdown_now.yaml
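Likewise, shutdown_now.yaml is not listed here; a minimal sketch of such a playbook. The async/poll settings let Ansible fire the shutdown without waiting on the dying SSH connection:

```yaml
---
- name: Shutdown Playbook
  hosts: "{{ target_host }}"
  become: yes
  tasks:
    - name: Shut down the machine immediately
      command: shutdown -h now
      async: 1
      poll: 0
```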

Conclusion:
You’ve successfully set up Ansible and used playbooks to automate tasks on your Raspberry Pi devices. Ansible simplifies managing multiple devices and allows you to perform configurations efficiently and consistently. With Ansible, you can now spend less time on repetitive tasks and focus more on your Raspberry Pi projects.

Happy automating!