How to Back Up Docker Data to a Different Location in Your LAN

Prerequisites

  • Docker data located at /var/lib/docker/volumes.
  • SSH access to the target backup system.

Passwordless SSH Login

First, set up passwordless SSH login:

ssh-keygen -t rsa
ssh-copy-id root@192.168.0.225
ssh root@192.168.0.225

Docker Volume Backup Script

Create a backup script named docker_backup.sh:

#!/bin/bash
set -e

# Define variables
source_dir="/var/lib/docker/volumes"
backup_dir="/opt/docker_backups"
keep_backups=10
current_datetime=$(date +"%Y-%m-%d_%H-%M-%S")
backup_filename="$current_datetime-backup.tar"
remote_user="root"
remote_server="192.168.0.225"
remote_dir="/opt/remote_docker_backups"

# Check if source and backup directories exist
if [ ! -d "$source_dir" ]; then
  echo "Source directory does not exist."
  exit 1
fi
if [ ! -d "$backup_dir" ]; then
  echo "Backup directory does not exist."
  exit 1
fi

# Stop running Docker containers, remembering which ones were running
running_containers=$(docker ps -q)
if [ -n "$running_containers" ]; then
  docker stop $running_containers
fi

# Create the backup
tar -cpf "$backup_dir/$backup_filename" "$source_dir"

# Restart only the containers that were running before the backup
# (restarting everything from "docker ps -a -q" would also start
# containers that were stopped on purpose)
if [ -n "$running_containers" ]; then
  docker start $running_containers
fi

# Compress and transfer the backup
gzip "$backup_dir/$backup_filename"
backup_filename="$current_datetime-backup.tar.gz"
scp "$backup_dir/$backup_filename" "$remote_user@$remote_server:$remote_dir"

# Remove backups older than $keep_backups days
find "$backup_dir" -type f -name "*-backup.tar.gz" -mtime +$keep_backups -exec rm {} \;
ssh "$remote_user@$remote_server" "find $remote_dir -type f -name '*-backup.tar.gz' -mtime +$keep_backups -exec rm {} \;"

echo "Backup was created: $backup_dir/$backup_filename and copied to $remote_server:$remote_dir."
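One thing to be aware of: keep_backups is a retention window in days, not a count of archives. find -mtime +10 matches files last modified more than 10 days ago, however many files that is. A quick way to see this in a scratch directory (illustrative; assumes GNU touch for the -d option):

```shell
# Create one fresh and one 15-day-old dummy archive in a scratch directory
demo_dir=$(mktemp -d)
touch "$demo_dir/new-backup.tar.gz"
touch -d "15 days ago" "$demo_dir/old-backup.tar.gz"

# Only the 15-day-old file matches -mtime +10; the fresh one is kept
find "$demo_dir" -type f -name "*-backup.tar.gz" -mtime +10
```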

Run the script:

sudo su
chmod +x docker_backup.sh
./docker_backup.sh
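It is worth confirming that a finished archive can actually be read back before you need it. The same tar/gzip round trip the script performs can be exercised on dummy data (paths here are illustrative):

```shell
# Simulate a volumes directory with one file in a scratch location
work_dir=$(mktemp -d)
mkdir -p "$work_dir/volumes/myapp"
echo "hello" > "$work_dir/volumes/myapp/data.txt"

# Archive and compress it the same way the backup script does
tar -cpf "$work_dir/test-backup.tar" -C "$work_dir" volumes
gzip "$work_dir/test-backup.tar"

# Listing the compressed archive should succeed and show the file
tar -tzf "$work_dir/test-backup.tar.gz"
```

Run the same `tar -tzf` check against a real archive in /opt/docker_backups before trusting the rotation.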

Ansible Alternative

Create an Ansible playbook named docker_backup.yml:

---
- name: Docker Backup Playbook
  hosts: rpidocker
  become: yes
  vars:
    source_dir: "/var/lib/docker/volumes"
    backup_dir: "/opt/docker_backups"
    keep_backups: 10
    current_datetime: "{{ lookup('pipe', 'date +%Y-%m-%d_%H-%M-%S') }}"
    backup_filename: "{{ current_datetime }}-backup.tar"
    remote_user: "root"
    remote_server: "192.168.0.225"
    remote_dir: "/opt/remote_docker_backups"

  tasks:
    - name: Check if source directory exists
      stat:
        path: "{{ source_dir }}"
      register: source_dir_stat

    - name: Fail if source directory does not exist
      fail:
        msg: "Source directory does not exist."
      when: not source_dir_stat.stat.exists

    - name: Check if backup directory exists
      stat:
        path: "{{ backup_dir }}"
      register: backup_dir_stat

    - name: Fail if backup directory does not exist
      fail:
        msg: "Backup directory does not exist."
      when: not backup_dir_stat.stat.exists

    - name: Record running Docker containers
      command: docker ps -q
      register: running_containers

    - name: Stop running Docker containers
      shell: docker ps -q | xargs -r docker stop

    - name: Create backup archive
      command: tar -cpf "{{ backup_dir }}/{{ backup_filename }}" "{{ source_dir }}"

    - name: Restart previously running Docker containers
      command: docker start {{ item }}
      loop: "{{ running_containers.stdout_lines }}"

    - name: Compress the backup archive
      command: gzip "{{ backup_dir }}/{{ backup_filename }}"
      args:
        chdir: "{{ backup_dir }}"

    - name: Copy backup to remote server
      command: scp "{{ backup_dir }}/{{ backup_filename }}.gz" "{{ remote_user }}@{{ remote_server }}:{{ remote_dir }}"

    - name: Delete older backups locally
      shell: find "{{ backup_dir }}" -type f -name "*-backup.tar.gz" -mtime +{{ keep_backups }} -exec rm {} \;

    - name: Delete older backups on remote server
      shell: ssh "{{ remote_user }}@{{ remote_server }}" "find {{ remote_dir }} -type f -name '*-backup.tar.gz' -mtime +{{ keep_backups }} -exec rm {} \;"

Run the playbook:

ansible-playbook -i inventory.ini docker_backup.yml

Your inventory.ini should look like:

[rpidocker]
192.168.0.224 ansible_user=root ansible_ssh_private_key_file=/path/to/your/private/key

Conclusion

You now have two methods to back up your Docker data securely to another location within your LAN. Choose the one that best fits your needs.




Setting Up Omada Controller on a Raspberry Pi 4 with Docker

Introduction

Managing TP-Link EAP devices becomes a breeze when you have a centralized controller. In this guide, we’ll walk through the steps to set up an Omada Controller on a Raspberry Pi 4 using Docker. This is an excellent solution for both home and small business networks.

Prerequisites

  • Raspberry Pi 4 with 4GB RAM
  • Docker installed on the Raspberry Pi
  • SSH access to the Raspberry Pi

Step-by-Step Guide

Step 1: SSH into Your Raspberry Pi

First, connect to your Raspberry Pi using SSH. This will allow you to execute commands remotely.

Step 2: Pull the Omada Controller Docker Image

Run the following command to pull the latest Omada Controller Docker image:

docker pull mbentley/omada-controller:latest

Step 3: Create Data Directories

Create directories to store Omada Controller’s data and work files:

mkdir -p /opt/tplink/OmadaController/data
mkdir -p /opt/tplink/OmadaController/work

Step 4: Run the Omada Controller Container

Execute the following command to run the Omada Controller container:

docker run -d \
  --name omada-controller \
  --restart unless-stopped \
  -e TZ='Europe/Copenhagen' \
  -e SMALL_FILES=false \
  -p 8088:8088 \
  -p 8043:8043 \
  -p 27001:27001/udp \
  -p 27002:27002 \
  -p 29810:29810/udp \
  -p 29811:29811 \
  -p 29812:29812 \
  -p 29813:29813 \
  -v /opt/tplink/OmadaController/data:/opt/tplink/EAPController/data \
  -v /opt/tplink/OmadaController/work:/opt/tplink/EAPController/work \
  mbentley/omada-controller:latest

Step 5: Access the Omada Controller

Finally, open a web browser and navigate to https://<Raspberry_Pi_IP>:8043. Follow the setup wizard to complete the installation.

Note: Replace <Raspberry_Pi_IP> with the actual IP address of your Raspberry Pi.

Conclusion

You’ve successfully set up an Omada Controller on your Raspberry Pi 4 using Docker. This will help you manage your TP-Link EAP devices efficiently. If you have any questions or run into issues, feel free to reach out.






How to Back Up and Restore Docker Containers and Volumes

Docker has revolutionized the way we develop, package, and deploy applications. But like any system, it’s crucial to have backups. In this post, we’ll walk through the process of backing up Docker containers and volumes to an external hard drive and restoring them on another Docker server.

Backing Up Docker Containers:

  1. Commit the Container:
    Before you can backup a container, you need to commit any changes made inside it to an image.
   docker commit <container_id> <backup_image_name>

  2. Save the Image:
    Once committed, save the image to a tarball.
   docker save -o <path_to_backup_image.tar> <backup_image_name>

Backing Up Docker Volumes:

  1. Locate the Volume:
    Identify where Docker stores its volume data.
   docker volume inspect <volume_name>

Look for the “Mountpoint” in the output.

  2. Copy the Data:
    Copy the volume data to your external hard drive.
   sudo cp -r <mountpoint_path> /path/to/external/hdrive/

Restoring on Another Docker Server:

  1. Load the Image:
    Transfer the tarball to the new Docker server and load the image.
   docker load -i <path_to_backup_image.tar>

  2. Run the Container:
    Create and run a new container from the backed-up image. Note that port mappings, environment variables, and volume mounts are not stored in the image, so re-specify them here.
   docker run -d <backup_image_name>

  3. Restore Volume Data:
    Copy the volume data from your external hard drive to the appropriate location on the new Docker server.
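A minimal sketch of that final copy, with scratch directories standing in for the external drive and the new volume’s Mountpoint (adjust the paths to your real setup):

```shell
# Stand-ins for the external-drive copy and the new volume's Mountpoint
external_copy=$(mktemp -d)
new_mountpoint=$(mktemp -d)
echo "app data" > "$external_copy/settings.conf"

# Copy the backed-up contents into the new volume's mountpoint
# (on a real server this usually needs sudo, since /var/lib/docker is root-owned)
cp -r "$external_copy/." "$new_mountpoint/"

cat "$new_mountpoint/settings.conf"   # prints "app data"
```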

Conclusion:

While Docker provides a seamless environment for application development and deployment, ensuring data safety is paramount. Regularly backing up containers and volumes ensures that you can quickly recover from unforeseen issues. Whether you’re migrating to a new server or recovering from a disaster, these steps will help you restore your Docker environment with ease.


Always remember to test your backups to ensure they can be restored correctly. Happy Dockering!




A Quick Guide to Setting Up the Homer Dashboard in a Docker Container

The Homer dashboard is a simple, customizable, and self-hosted dashboard that allows users to centralize their most frequently used links, services, and tools in one place. It’s especially useful for those who manage a homelab or multiple services. Here’s a quick guide on how to set it up using Docker:

1. Deploying the Homer Dashboard with Docker:

To deploy the Homer dashboard using Docker, use the following command:

docker run -d --restart=always --name homer -p 8090:8080 -v homer_data:/www/assets b4bz/homer:latest

2. Customizing the Homepage with config.yml:

Homer allows for easy customization of its homepage through the config.yml file. Here’s how you can make it your own:

  • Using Icons: For a personalized touch, you can use icons as logos for your dashboard. Homer defaults to using icons from Font Awesome, making it easy to find and implement your preferred icons.
  • Adding Links to the Navigation Bar: Centralize your most-used internet links in the navigation bar. Here’s a sample configuration:
links:
  - name: "Google"
    icon: "fab fa-google"
    url: "https://google.com"
    target: "_blank"
  ...

  • Organizing Services and Devices: Under the services section, you can categorize and list down links to your devices and services. Here’s how you can structure it:
services:
  - name: "Network"
    icon: "fas fa-network-wired"
    items:
      - name: "pfSense"
        icon: "fas fa-fire"
        subtitle: "pfSense firewall"
        tag: "network"
        url: "https://192.168.0.1/"
        target: "_blank"
  ...

3. Editing the config.yml File:

To customize your dashboard, you’ll need to edit the config.yml file. Here’s how:

  • Using Linux: Navigate to the directory containing the config.yml file and make a backup before editing:
sudo su
cd /var/lib/docker/volumes/homer_data/_data
cp config.yml config.old
vi config.yml

  • Using Docker: Access the container and navigate to the directory containing the config.yml file. Again, make sure to backup the file before making changes:
docker exec -it homer /bin/sh
cd assets
cp config.yml config.old
vi config.yml
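A stray indentation error in config.yml can leave the dashboard blank, so it pays to confirm the file still parses before reloading. One way to check, assuming python3 with PyYAML is available on the host (an assumption, not something Homer itself requires; any YAML linter works too):

```shell
# Write a tiny sample config and confirm it parses as YAML
# (run the same python3 one-liner against your real config.yml)
cat > /tmp/config_sample.yml <<'EOF'
title: "Homer"
links:
  - name: "Google"
    url: "https://google.com"
EOF
python3 -c 'import yaml; yaml.safe_load(open("/tmp/config_sample.yml")); print("config OK")'
```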

4. Why Use the Homer Dashboard?

The Homer dashboard is not just about aesthetics. It provides a centralized location to access the homepages of your homelab devices. Plus, with features like dark mode, tags, and search functionality, it enhances the user experience, making navigation smoother and more intuitive.

In conclusion, the Homer dashboard is a must-have for anyone looking to organize their digital workspace. With its easy setup and customization options, it’s a tool that can adapt to any user’s needs.




Setting Up SMTP Mail to Pushover Notification with Docker on Raspberry Pi


Are you looking for a way to easily push SMTP emails to your Pushover notifications using your Raspberry Pi? With the power of Docker and the smtp-pushover image by mattbun, you can set up a simple solution that allows you to send emails to your Raspberry Pi’s port 25 and have the messages forwarded to your Pushover account. In this guide, we’ll walk you through the process step by step.

Prerequisites:

  1. Raspberry Pi with Docker installed.
  2. Basic understanding of Docker and Docker Compose.
  3. Pushover account with User Key and API Token.

Step 1: Pull the Docker Image

The first step is to pull the smtp-pushover Docker image from the GitHub Container Registry. Open a terminal on your Raspberry Pi and execute the following command:

docker pull ghcr.io/mattbun/smtp-pushover:main@sha256:bb4a333892e612edbff843c0a8c79112ff0e61a2e605ebcce4755701513f5f38

Step 2: Configure Docker Compose

Create a docker-compose.yml file in a directory of your choice. Copy and paste the following configuration into the file:

version: '3'

services:
  smtp-pushover:
    restart: unless-stopped
    container_name: smtp-pushover
    image: ghcr.io/mattbun/smtp-pushover
    ports:
      - "25:25"
    environment:
      - PORT=25
      - PUSHOVER_USER=YOUR_PUSHOVER_USER_KEY
      - PUSHOVER_TOKEN=YOUR_PUSHOVER_API_TOKEN

Replace YOUR_PUSHOVER_USER_KEY and YOUR_PUSHOVER_API_TOKEN with your actual Pushover User Key and API Token.

Step 3: Start the Service

In the same directory where you created the docker-compose.yml file, open a terminal and run the following command to start the Docker container:

docker-compose up -d smtp-pushover

The -d flag stands for “detached mode,” which will run the container in the background.

Step 4: Sending Emails to Port 25

You’re all set! You can now send emails to the port 25 on your Raspberry Pi. Any emails sent to this port will be forwarded to your Pushover account as notifications.
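To verify the pipeline end to end, you can hand-craft a message file and submit it over SMTP with curl, which speaks the protocol natively. The addresses below are placeholders, and the delivery line is commented out because it needs the container to be up and reachable:

```shell
# Compose a minimal test message (placeholder addresses)
cat > /tmp/test_message.txt <<'EOF'
From: alerts@homelab.local
To: pushover@homelab.local
Subject: Test notification

Hello from the Raspberry Pi.
EOF

# Deliver it to the container's port 25 (uncomment with the service running):
# curl smtp://localhost:25 --mail-from alerts@homelab.local \
#      --mail-rcpt pushover@homelab.local --upload-file /tmp/test_message.txt
```

If the notification arrives on your phone, the whole chain — SMTP, the container, and the Pushover API — is working.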

Conclusion

With just a few simple steps, you’ve set up your own SMTP server using Docker on your Raspberry Pi and configured it to forward emails to your Pushover notifications. This can be a convenient way to receive important alerts and messages directly on your mobile device. Remember to keep your Pushover User Key and API Token secure and never share them publicly. Happy notifying!

For more information and options, you can refer to the official smtp-pushover repository.




Docker, Docker Swarm, and Kubernetes: Unraveling the Differences and Choosing the Ideal Solution for Your Home Lab


Introduction

Containerization has revolutionized the way we deploy, manage, and scale applications. Among the popular container orchestration tools, Docker, Docker Swarm, and Kubernetes stand out as leading solutions. In this blog post, we will explore the key differences between these technologies and help you determine the best and easiest choice for your home lab setup.

  1. Docker: The Foundation of Containerization

Docker is the pioneering platform that brought containerization into the mainstream. It allows developers to package applications and their dependencies into lightweight containers that can run consistently across various environments. Docker offers simplicity and ease of use, making it an excellent choice for individuals new to containerization.

  2. Docker Swarm: Built-in Orchestration for Simplicity

Docker Swarm, on the other hand, is Docker’s native orchestration tool. It enables users to manage multiple Docker containers across multiple hosts, providing essential features like service discovery, load balancing, and automated scaling. Docker Swarm is designed for smaller-scale deployments, making it ideal for home labs or small projects.

  3. Kubernetes: Scalability and Complexity

Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform developed by Google. It is highly robust and scalable, capable of managing large-scale containerized applications across multiple nodes and clusters. While Kubernetes offers unparalleled flexibility and robustness, it comes with increased complexity, making it more suitable for enterprise-level deployments.

  4. Key Differences and Use Cases

  • Docker is the simplest option for containerization, best suited for individual developers or small-scale projects in home labs.
  • Docker Swarm is an easy-to-use orchestration tool that offers basic container management and is suitable for small to medium-sized deployments.
  • Kubernetes shines in large-scale, complex environments, where it can manage hundreds or thousands of containers across multiple clusters.

  5. Best Choice for a Home Lab

For a home lab setup, where simplicity and ease of use are essential, Docker or Docker Swarm are the best choices. Docker provides a straightforward way to containerize applications, while Docker Swarm adds basic orchestration capabilities without introducing significant complexity.

  6. Ease of Use in a Home Lab

Between Docker and Docker Swarm, Docker stands out as the easiest option for a home lab. Its straightforward containerization approach and minimal setup requirements make it accessible to beginners and those looking for a quick and efficient way to deploy applications in their home network.

Conclusion

In conclusion, Docker, Docker Swarm, and Kubernetes are all powerful containerization and orchestration tools with distinct use cases. While Kubernetes excels in large, complex deployments, Docker and Docker Swarm are the optimal choices for home labs due to their simplicity and ease of use.

If you are new to containerization and looking to start your home lab journey, Docker is the perfect entry point. With its user-friendly interface and minimal setup, Docker allows you to focus on deploying and managing applications without being overwhelmed by unnecessary complexities.

Remember, the best choice ultimately depends on your specific requirements and the scale of your projects. Whether you choose Docker or Docker Swarm, rest assured that you are embarking on a journey that empowers you to harness the potential of containerization in your home lab setup.




Installing Yacht – Self-Hosted Web Interface for Docker

Step 1: Create Docker Volume for Yacht

To create a Docker volume for Yacht, use the following command:

docker volume create yacht

Step 2: Install and Run Yacht

To install Yacht and run it in Docker, execute the following command:

docker run -d -p 8000:8000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v yacht:/config \
  selfhostedpro/yacht

Explanation of options used:

  • -d: Run the container in the background (detached mode).
  • -p 8000:8000: Map port 8000 from the container to port 8000 on the host system. This allows you to access Yacht’s web interface at http://localhost:8000.
  • -v /var/run/docker.sock:/var/run/docker.sock: Mount the Docker socket inside the container to allow Yacht to interact with Docker on the host system.
  • -v yacht:/config: Create a Docker volume named “yacht” and mount it to the “/config” directory inside the container. This volume allows you to persist Yacht’s configuration and data.

Step 3: First Login

Once Yacht is up and running, you can log in to the web interface using the following credentials:

  • URL: http://localhost:8000
  • Username: admin@yacht.local
  • Password: pass

Step 4: Import Template (Optional)

If you wish to import a template for pre-configured containers, use the following URL:
Template URL: https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/yacht/Template/template.json

Conclusion:
By following these steps, you have successfully installed Yacht, a self-hosted web interface for managing Docker containers. You can now access the Yacht web interface and start managing your Docker containers with ease.

Enjoy the benefits of Yacht’s user-friendly interface and simplified Docker container management! Happy sailing!




Setting Up WordPress with MariaDB Using Docker

Introduction:
WordPress is a popular content management system for building websites and blogs. In this guide, we’ll show you how to set up WordPress with MariaDB (a MySQL-compatible database) using Docker, allowing you to easily create and manage your website.

Step 1: Install and Run MariaDB Container

To install MariaDB and run it in Docker, use the following command:

docker run -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_DATABASE=wordpress \
  --name wordpressdb -v mariadb_data:/var/lib/mysql -d mariadb:latest

Explanation of options used:

  • -e MYSQL_ROOT_PASSWORD=wordpress: Set the root password for the MariaDB database to “wordpress”.
  • -e MYSQL_DATABASE=wordpress: Create a database named “wordpress” for the WordPress installation.
  • --name wordpressdb: Assign the name “wordpressdb” to the MariaDB container for easy management.
  • -v mariadb_data:/var/lib/mysql: Create a Docker volume named “mariadb_data” and mount it to the “/var/lib/mysql” directory inside the container. This volume allows you to persist MariaDB’s data.

Step 2: Install and Run WordPress Container

To install WordPress and run it in Docker, use the following command:

docker run -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=wordpress \
  --name wordpress --link wordpressdb:mysql -p 80:80 -v wordpress_data:/var/www/html -d wordpress:latest

Explanation of options used:

  • -e WORDPRESS_DB_USER=root: Set the WordPress database user to “root” (linked to the MariaDB container).
  • -e WORDPRESS_DB_PASSWORD=wordpress: Set the WordPress database password to “wordpress” (linked to the MariaDB container).
  • --name wordpress: Assign the name “wordpress” to the WordPress container for easy management.
  • --link wordpressdb:mysql: Link the WordPress container to the MariaDB container, allowing WordPress to connect to the database. Note that --link is a legacy Docker feature; on current Docker versions, a user-defined bridge network is the preferred way to connect containers.
  • -p 80:80: Map port 80 from the container to port 80 on the host system. This allows you to access WordPress at http://localhost.
  • -v wordpress_data:/var/www/html: Create a Docker volume named “wordpress_data” and mount it to the “/var/www/html” directory inside the container. This volume allows you to persist WordPress’s data.

Conclusion:
By following these steps and using Docker, you have successfully set up WordPress with MariaDB on your system. You can now access and manage your WordPress website by visiting http://localhost in your web browser.

Enjoy creating and managing your WordPress website with ease using Docker! Happy blogging!




Installing Searxng on ARM64 Architecture with Docker

Introduction:
Searxng is a privacy-friendly and open-source metasearch engine that aggregates search results from various sources. In this guide, we’ll walk you through the process of installing Searxng on an ARM64 architecture using Docker, allowing you to set up your own search engine.

Step 1: Install Searxng with Docker

To install Searxng and run it in Docker on ARM64, execute the following command:

docker run -d -p 8888:8080 \
  --name=searxng \
  -v "searxng_data:/etc/searx" \
  -v "searxng_data:/etc/searxng" \
  -e "BASE_URL=http://192.168.0.224:8888" \
  -e "INSTANCE_NAME=searxng" \
  searxng/searxng:latest

Explanation of options used:

  • -d: Run the container in the background (detached mode).
  • -p 8888:8080: Map port 8080 from the container to port 8888 on the host system. This allows you to access Searxng’s web interface at http://localhost:8888.
  • --name=searxng: Assign the name “searxng” to the container for easy management.
  • -v "searxng_data:/etc/searx" and -v "searxng_data:/etc/searxng": Create a Docker volume named “searxng_data” and mount it at both configuration paths (the legacy /etc/searx and the current /etc/searxng). This volume allows you to persist Searxng’s data and configurations.
  • -e "BASE_URL=http://192.168.0.224:8888": Set the base URL for the Searxng instance. Replace “192.168.0.224” with the IP address or domain name of your server.
  • -e "INSTANCE_NAME=searxng": Specify a custom name for the Searxng instance.

Step 2: Access Searxng Web Interface

Once the Searxng container is up and running, open a web browser and navigate to http://localhost:8888. You will be directed to Searxng’s web interface, where you can perform searches and explore the search engine’s features.

Conclusion:
By following these steps and using Docker, you have successfully installed Searxng on an ARM64 architecture. You now have your own private search engine, Searxng, up and running, allowing you to search the web with enhanced privacy and control.

Enjoy the benefits of Searxng and have fun searching! Happy exploring!




Setting Up Prometheus with Docker for Monitoring

Introduction:
Prometheus is a widely used open-source monitoring and alerting toolkit. In this guide, we’ll show you how to set up Prometheus using Docker, allowing you to monitor your applications and systems efficiently.

Step 1: Install Prometheus with Docker

To install Prometheus and run it in Docker, execute the following command:

docker run --restart always -d --name prometheus -p 9090:9090 -v prometheus_data:/prometheus prom/prometheus

Explanation of options used:

  • --restart always: Configure the Prometheus container to restart automatically if it stops unexpectedly.
  • -d: Run the container in the background (detached mode).
  • --name prometheus: Assign the name “prometheus” to the container for easy management.
  • -p 9090:9090: Map port 9090 from the container to port 9090 on the host system. This allows you to access Prometheus’s web interface at http://localhost:9090.
  • -v prometheus_data:/prometheus: Create a Docker volume named “prometheus_data” and mount it to the “/prometheus” directory inside the container. This volume allows you to persist Prometheus’s data and configurations.
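By default, the prom/prometheus image ships a configuration that scrapes only Prometheus itself. To scrape additional targets, write your own prometheus.yml and mount it over /etc/prometheus/prometheus.yml in the container with an extra -v flag. A minimal sketch (the node job and its 192.168.0.223:9100 target are illustrative, assuming a node_exporter you have set up separately):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "node"
    static_configs:
      - targets: ["192.168.0.223:9100"]  # example node_exporter target
```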

Step 2: Access Prometheus Web Interface

Once the Prometheus container is up and running, open a web browser and navigate to http://localhost:9090. You will be directed to Prometheus’s web interface, where you can explore and configure the monitoring system.

Step 3: Add Prometheus Data Source

To use Prometheus as a data source in visualization tools like Grafana, follow these steps:

  1. Open Grafana’s web interface, which should be available at http://192.168.0.223:3000 (assuming Grafana is running on port 3000).
  2. Log in to Grafana using your credentials.
  3. Click on the “Configuration” (gear) icon in the sidebar and select “Data Sources.”
  4. Click on “Add data source.”
  5. Choose “Prometheus” from the list of available data sources.
  6. In the “HTTP” section, set the “URL” field to the address of your Prometheus instance, for example http://192.168.0.223:9090. If Grafana itself runs in a container, http://localhost:9090 points at the Grafana container rather than the host, so use the host’s IP address instead.
  7. Optionally, provide a custom name for the data source in the “Name” field.
  8. Click on “Save & Test” to save the data source configuration.

Congratulations! You have successfully set up Prometheus with Docker and added it as a data source in Grafana. You can now use Prometheus to monitor your applications and systems, as well as create dashboards and visualizations in Grafana.

Enjoy the power and flexibility of Prometheus for monitoring and alerting in your Docker environment! Happy monitoring!