Enhancing Docker Swarm Networking with Macvlan

In Docker Swarm, the inability to use the host network directly for stacks presents a challenge for seamless integration into your local LAN. This blog post explores a solution using Macvlan to address this limitation, enabling Docker Swarm stacks to communicate efficiently on your network. We’ll walk through the steps of reserving IP addresses, configuring Macvlan on each node, and deploying a service to utilize these networks.

Reserving IP Addresses in DHCP

For a Docker Swarm cluster, it’s crucial to reserve specific IP addresses within your network to prevent conflicts. Here’s how to approach this task:

  • Network Configuration: Assuming a network range of 192.168.0.0/24 with a gateway at 192.168.0.1.
  • DHCP Server Pool: The existing DHCP server managed by pfSense allocates addresses from 192.168.0.1 to 192.168.0.150.
  • Reserved Range for Docker Swarm: For Macvlan usage, the range from 192.168.0.180 to 192.168.0.204 is reserved. Each node is given its own /30 subnet of 4 addresses, of which 2 are usable; the remaining two are the subnet’s network and broadcast addresses.

Node Configuration Overview

Each node is allocated a /30 subnet, as detailed below:

  • Node 1: 192.168.0.180/30 – Usable IPs: 192.168.0.181, 192.168.0.182
  • Node 2: 192.168.0.184/30 – Usable IPs: 192.168.0.185, 192.168.0.186
  • Node 3: 192.168.0.188/30 – Usable IPs: 192.168.0.189, 192.168.0.190
  • Node 4: 192.168.0.192/30 – Usable IPs: 192.168.0.193, 192.168.0.194

Configuring Macvlan on Each Node

To avoid IP address conflicts, it’s essential to define the Macvlan configuration individually for each node:

  1. Create macvlanconfig_swarm in Portainer: For each node, set up a unique Macvlan configuration specifying the driver as Macvlan, the parent interface (e.g., eth0), and the subnet and gateway. Assign each node its /30 subnet range.
  2. Create the Swarm-Scoped Macvlan Network: After configuring each node, create the Macvlan network itself from a manager node. This step involves creating a network with the Macvlan driver and linking it to the macvlanconfig_swarm configuration, as shown in the CLI sketch below.
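
If you prefer the command line over Portainer, the two steps map onto Docker’s config-only networks. The following is a minimal sketch, assuming eth0 as the parent interface, Node 1’s /30 range, and the network name macvlan used in the Compose example further down; adjust the --ip-range on each node.

# On each node: a config-only network holding that node's /30 slice (Node 1 shown)
docker network create --config-only \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  --ip-range=192.168.0.180/30 \
  -o parent=eth0 \
  macvlanconfig_swarm

# On a manager node: the swarm-scoped Macvlan network referencing the local configs
docker network create -d macvlan \
  --scope swarm \
  --config-from macvlanconfig_swarm \
  macvlan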

Deploying Services Using Macvlan

With Macvlan, services like Nginx can be deployed across the Docker Swarm without port redirection, ensuring each instance receives a unique IP address on the LAN. Here’s a Docker Compose example for deploying an Nginx service:

version: '3.8'
services:
  nginx:
    image: nginx:latest
    volumes:
      - type: volume
        source: nginx_data
        target: /usr/share/nginx/html
        volume:
          nocopy: true
    networks:
      - macvlan

volumes:
  nginx_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.0.220,nolock,soft,rw
      device: ":/data/nginx/data"

networks:
  macvlan:
    external: true
    name: "macvlan"

Scaling and Managing Services

As your Docker Swarm grows, each Nginx instance will have its distinct IP in the LAN. To manage these instances effectively, consider integrating an external load balancer. This setup allows for seamless distribution of incoming traffic across all Nginx instances, presenting them as a unified service.
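
As a rough sketch, an external Nginx instance outside the Swarm could balance across the Macvlan addresses handed out to the instances. The IPs below reuse the usable addresses from the reserved ranges above and are purely illustrative:

upstream swarm_nginx {
    server 192.168.0.181;
    server 192.168.0.185;
    server 192.168.0.189;
    server 192.168.0.193;
}

server {
    listen 80;
    location / {
        proxy_pass http://swarm_nginx;
    }
}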

Conclusion

Utilizing Macvlan within a Docker Swarm cluster provides a robust solution for direct LAN communication. By carefully reserving IP ranges and configuring each node with Macvlan, you can ensure efficient network operations. Remember, the deployment of services without port redirection requires careful planning, particularly when scaling, making an external load balancer an essential component of your architecture.




Installing Portainer in Docker Swarm: A Step-by-Step Guide

Portainer is an essential tool for managing your Docker environments, offering a simple yet powerful UI for handling containers, images, networks, and more. Integrating Portainer into your Docker Swarm enhances your cluster’s management, making it more efficient and user-friendly. Here’s a concise guide on installing Portainer within a Docker Swarm setup, leveraging the power of NFS for persistent data storage.

Prerequisites

  • A Docker Swarm cluster is already initialized.
  • NFS server is set up for persistent storage (in this case, at 192.168.0.220).

Step 1: Prepare the NFS Storage

Before proceeding with Portainer installation, ensure you have a dedicated NFS share for Portainer data:

  1. Create a directory on your NFS server (192.168.0.220) that will be used by Portainer: /data/portainer/data.
  2. Ensure this directory is exported and accessible by your Swarm nodes (see the sketch below).
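
A minimal sketch of those two steps on the NFS server, assuming the /data export used elsewhere in this series already covers its subdirectories:

sudo mkdir -p /data/portainer/data
sudo exportfs -v    # confirm the export is active and reachable from the Swarm nodes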

Step 2: Create the Portainer Service

The following Docker Compose file is designed for deployment in a Docker Swarm environment and utilizes NFS for storing Portainer’s data persistently.

version: '3.2'

services:
  agent:
    image: portainer/agent:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]

  portainer:
    image: portainer/portainer-ee:latest
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - type: volume
        source: portainer_data
        target: /data
        volume:
          nocopy: true
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true

volumes:
  portainer_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.0.220,nolock,soft,rw
      device: ":/data/portainer/data"

Step 3: Deploy Portainer

To deploy Portainer, save the above configuration to a file named portainer-agent-stack.yml. Then, execute the following command on one of your Swarm manager nodes:

docker stack deploy -c portainer-agent-stack.yml portainer

This command deploys the Portainer server and its agent across the Swarm. The agent provides cluster-wide visibility to the Portainer server, enabling management of the entire Swarm from a single Portainer instance.
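
You can confirm that both services are running before moving on; service names follow the <stack>_<service> pattern:

docker stack ps portainer
docker service ls --filter name=portainer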

Step 4: Access Portainer

Once deployed, Portainer is accessible via http://<your-manager-node-ip>:9000. The initial login requires setting up an admin user and password. After logging in, you can connect Portainer to your Docker Swarm environment by selecting it from the home screen.

Conclusion

Integrating Portainer into your Docker Swarm setup provides a robust, web-based UI for managing your cluster’s resources. By leveraging NFS for persistent storage, you ensure that your Portainer configuration and data remain intact across reboots and redeployments, enhancing the resilience and flexibility of your Docker Swarm environment.




Docker Swarm Storage Options: Bind Mounts vs NFS Volume Mounts

When deploying services in a Docker Swarm environment, managing data persistence is crucial. Two common methods are bind mounts and NFS volume mounts. While both serve the purpose of persisting data outside containers, they differ in flexibility, scalability, and ease of management, especially in a clustered setup like Docker Swarm.

Bind Mounts directly link a file or directory on the host machine to a container. This method is straightforward but less flexible when scaling across multiple nodes in a Swarm, as it requires the exact path to exist on all nodes.

NFS Volume Mounts, on the other hand, leverage a Network File System (NFS) to share directories and files across a network. This approach is more scalable and flexible for Docker Swarm, as it allows any node in the swarm to access shared data, regardless of the physical location of the files.

Example: Deploying Nginx with Bind and NFS Volume Mounts

Bind Mount Example:

For a bind mount with Nginx, you’d specify the local directory directly in your Docker Compose file:

services:
  nginx:
    image: nginx:latest
    volumes:
      - /data/nginx/data:/usr/share/nginx/html

This configuration mounts /data/nginx/data from the host to the Nginx container. Note that for this to work in a Swarm, /data/nginx/data must be present on all nodes.

NFS Volume Mount Example:

Using NFS volumes, especially preferred for Docker Swarm setups, you’d first ensure your NFS server (at 192.168.0.220) exports the /data directory. Then, define the NFS volume in your Docker Compose file:

volumes:
  nginx_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.0.220,nolock,soft,rw
      device: ":/data/nginx/data"

services:
  nginx:
    image: nginx:latest
    volumes:
      - nginx_data:/usr/share/nginx/html

This approach mounts the NFS shared directory /data/nginx/data into the Nginx container. It allows for seamless data sharing across the Swarm, simplifying data persistence in a multi-node environment.
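
Assuming the combined file is saved as nginx-stack.yml (an arbitrary name), deploying and checking it looks like this; note that Swarm prefixes the volume name with the stack name, so nginx_nginx_data below is an assumption based on a stack named nginx:

docker stack deploy -c nginx-stack.yml nginx
docker volume inspect nginx_nginx_data    # run on the node where the task was scheduled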

Conclusion

Choosing between bind mounts and NFS volume mounts in Docker Swarm comes down to your specific requirements. NFS volumes offer superior flexibility and ease of management for distributed applications, making them the preferred choice for scalable, resilient applications. By leveraging NFS for services like Nginx, you can ensure consistent data access across your Swarm, facilitating a more robust and maintainable deployment.




How to Create a Macvlan Network in Docker Swarm

In a Docker Swarm environment, networking configurations play a critical role in ensuring that services communicate effectively. One such configuration is the Macvlan network, which allows containers to appear as physical devices on your network. This setup can be particularly useful for services that require direct access to a physical network. Here’s a step-by-step guide on how to create a Macvlan network in your Docker Swarm cluster.

Step 1: Define the Macvlan Network on a Manager Node

The first step involves defining the Macvlan network on a manager node of your Docker Swarm. This network will be used across the entire cluster. To create a Macvlan network, you’ll need to specify a subnet, gateway, and parent interface, and make the network attachable. Here’s how you can do it:

docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  -o parent=eth0 \
  --attachable \
  swarm-macvlan

This command creates a Macvlan network named swarm-macvlan with a subnet of 192.168.0.0/24, a gateway of 192.168.0.1, and attaches it to the eth0 interface of your Docker host. The --attachable flag allows standalone containers to connect to the Macvlan network.
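
Because the network is attachable, you can sanity-check it with a throwaway container, which should come up with an address from the 192.168.0.0/24 subnet:

docker run --rm --network swarm-macvlan alpine ip addr show eth0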

Step 2: Verify the Macvlan Network Creation

After creating the Macvlan network, it’s important to verify its existence and ensure it’s correctly set up. You can inspect the network details using:

docker network inspect swarm-macvlan

This command provides detailed information about the swarm-macvlan network, including its configuration and any connected containers.

To list all networks and confirm that your Macvlan network is among them, use:

docker network ls

This command lists all Docker networks available on your host, and you should see swarm-macvlan listed among them.

Conclusion

Creating a Macvlan network in Docker Swarm enhances your cluster’s networking capabilities, allowing containers to communicate more efficiently with external networks. By following the steps outlined above, you can successfully set up a Macvlan network and integrate it into your Docker Swarm environment. This setup is particularly beneficial for services that require direct access to the network, providing them with the necessary environment to operate effectively.




Moving Docker Swarm’s Default Storage Location: A Guide

When managing a Docker Swarm, one of the critical aspects you need to consider is where your data resides. By default, Docker uses /var/lib/docker to store its data, including images, containers, volumes, and networks. However, this may not always be the optimal location, especially if you’re working with limited storage space on your system partition or need to ensure data persistence on a more reliable storage medium.

In this blog post, we’ll walk you through the steps to move Docker’s default storage location to a new directory. This process can help you manage storage more efficiently, especially in a Docker Swarm environment where data persistence and storage scalability are crucial.

1. Stop the Docker Server

Before making any changes, ensure that the Docker service is stopped to prevent any data loss or corruption. You can stop the Docker server by running:

sudo systemctl stop docker

2. Edit the Docker Daemon Config

Next, you’ll need to modify the Docker daemon configuration file. This file may not exist by default, but you can create it or edit it if it’s already present:

sudo vi /etc/docker/daemon.json

Inside the file, specify the new storage location using the data-root attribute. For example, to move Docker’s storage to /data, you would add the following configuration:

{
  "data-root": "/data"
}

Save and close the file after making this change.

3. Move the Existing Data

With Docker stopped and the configuration file updated, it’s time to move the existing Docker data to the new location. Use cp in archive mode so that ownership, permissions, and symlinks are preserved, and copy everything from the default directory to the new one:

sudo cp -a /var/lib/docker/. /data/

This step ensures that all your existing containers, images, and other Docker data are preserved and moved to the new location.

4. Restart the Docker Server

After moving the data, you’ll need to reload the systemd configuration and restart the Docker service to apply the changes:

sudo systemctl daemon-reload
sudo systemctl restart docker
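
To confirm that Docker is now using the new location, check the reported root directory and that your containers and images are still present:

docker info --format '{{ .DockerRootDir }}'    # should print /data
docker ps -a
docker images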

Conclusion

By following these steps, you’ve successfully moved Docker’s default storage location to a new directory. This change can significantly benefit Docker Swarm environments by improving storage management and ensuring data persistence across your cluster. Always remember to back up your data before performing such operations to avoid any unintentional data loss.




Automating NFS Mounts with Autofs on Raspberry Pi for Docker Swarm

When managing a Docker Swarm on a fleet of Raspberry Pis, ensuring consistent and reliable access to shared storage across your nodes is crucial. This is where Autofs comes into play. Autofs is a utility that automatically mounts network file systems when they’re accessed, making it an ideal solution for managing persistent storage in a Docker Swarm environment. In this blog post, we’ll walk through the process of installing and configuring Autofs on a Raspberry Pi to use with an NFS server for shared storage.

Step 1: Setting Up the NFS Server

Before configuring Autofs, you need an NFS server that hosts your shared storage. If you haven’t already set up an NFS server, you can do so by installing the nfs-kernel-server package on your Raspberry Pi designated as the NFS server:

sudo apt install nfs-kernel-server -y

Then, configure the NFS export by editing the /etc/exports file and adding the following line to share the /data directory:

/data 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)

Restart the NFS server to apply the changes:

sudo /etc/init.d/nfs-kernel-server restart

Verify the export with:

sudo exportfs

Step 2: Installing Autofs on Client Raspberry Pis

On each Raspberry Pi client that needs access to the NFS share, install Autofs:

sudo apt update -y
sudo apt install autofs -y

Reboot the Raspberry Pi to ensure all updates are applied:

sudo reboot

Step 3: Configuring Autofs

After installing Autofs, you’ll need to configure it to automatically mount the NFS share. Edit the /etc/auto.master file and add a line for the mount point:

/-    /etc/auto.data --timeout=60

Create and edit /etc/auto.data to specify the NFS share details:

/data -fstype=nfs,rw 192.168.0.220:/data

This configuration tells Autofs to mount the NFS share located at 192.168.0.220:/data to /data on the client Raspberry Pi.

Step 4: Starting and Testing Autofs

Enable and start the Autofs service:

sudo systemctl enable autofs
sudo systemctl start autofs

Check the status to ensure it’s running without issues:

sudo systemctl status autofs

To test, simply access the /data directory on the client Raspberry Pi. Autofs should automatically mount the NFS share.

cd /data
ls

If you see the contents of your NFS share, the setup is successful. Autofs will now manage the mount points automatically, ensuring your Docker Swarm has seamless access to shared storage.
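
You can also confirm that the mount is handled by Autofs:

findmnt /data    # should show an nfs mount from 192.168.0.220:/data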

Conclusion

By leveraging Autofs with NFS on Raspberry Pi, you can streamline the management of shared volumes in your Docker Swarm, enhancing both reliability and efficiency. This setup minimizes the manual intervention required for mounting shared storage, making your Swarm more resilient to reboots and network changes. Happy Swarming!




Installing Watchtower on Docker Swarm and Managing Updates with Labels

Docker Swarm offers a streamlined approach to managing containerized applications across multiple hosts. To ensure your applications remain up-to-date without manual intervention, integrating Watchtower into your Docker Swarm setup is a savvy move. Watchtower automates the process of checking for and deploying the latest images for your running containers. However, there may be instances where you wish to exempt specific containers or services from automatic updates. This is achievable through the strategic use of labels. Here’s a concise guide on installing Watchtower on Docker Swarm and leveraging labels to control updates.

Step 1: Deploying Watchtower in Docker Swarm

To begin, you’ll need to create a Docker Compose file for Watchtower. This file instructs Docker Swarm on how to deploy Watchtower correctly. Here’s an example watchtower.yml file designed for Swarm deployment:

version: '3.7'
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    command: --interval 30 --label-enable
    deploy:
      placement:
        constraints: [node.role == manager]

This configuration deploys Watchtower on a manager node, given its need to access the Docker socket. The --label-enable flag restricts Watchtower to containers that explicitly carry the com.centurylinklabs.watchtower.enable label set to "true".

Step 2: Deploying Watchtower Stack

Deploy the Watchtower stack using the following command, ensuring you’re in the directory containing your watchtower.yml:

docker stack deploy -c watchtower.yml watchtower

This command initializes the Watchtower service within your Docker Swarm, setting it to check for new images every 30 seconds.
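
To verify that Watchtower is running and polling, check the service and follow its logs; the service name follows the <stack>_<service> pattern:

docker service ps watchtower_watchtower
docker service logs -f watchtower_watchtower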

Step 3: Excluding Containers from Automatic Updates

Because Watchtower runs with --label-enable, only containers labeled com.centurylinklabs.watchtower.enable with a value of "true" are updated; anything without the label is left alone. To explicitly exclude a container or service (or to opt it out when running Watchtower without --label-enable), set the label’s value to "false". This can be done when you first deploy a service or by editing existing services through configuration files or management tools like Portainer.

For a new service, include the label in your Docker Compose file. Note that the label goes at the service level rather than under deploy:, because deploy labels are attached to the Swarm service object, while Watchtower inspects the labels on the containers themselves:

version: '3.8'
services:
  your_service:
    image: your_image
    labels:
      com.centurylinklabs.watchtower.enable: "false"

For existing containers or services, you can add or modify labels via Portainer’s UI by editing the container or service configuration, allowing for flexible management of your update policies.
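
For an existing Swarm service, the same container label can also be added from the command line (your_service is a placeholder for the actual service name):

docker service update \
  --container-label-add com.centurylinklabs.watchtower.enable=false \
  your_service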

Conclusion

Integrating Watchtower into your Docker Swarm infrastructure simplifies the task of keeping containers up-to-date, ensuring your applications benefit from the latest features and security patches. With the added control of exclusion labels, you maintain complete authority over which containers are automatically updated, providing a balance between automation and manual oversight. This setup guarantees a robust, efficient, and up-to-date deployment, minimizing downtime and enhancing security across your Docker Swarm environment.




Simplifying Complex Deployments: The GPT Docker Swarm on Raspberry Pi

Link to my “Docker Swarm” GPT on OpenAI ChatGPT.

In the ever-evolving landscape of technology, combining the power of AI with the flexibility of Docker Swarm on a Raspberry Pi infrastructure presents an innovative approach to scalable and efficient computing solutions. This integration, known as the GPT Docker Swarm, showcases a unique blend of artificial intelligence capabilities with robust, decentralized computing power, tailored specifically for environments demanding both intelligence and adaptability.

The Hardware Foundation

At the core of the GPT Docker Swarm is a quartet of Raspberry Pi 4B units, each boasting 8GB of RAM and 256GB of local storage via M.2 SSDs over USB 3. This hardware setup is organized into three manager nodes (RPT1, RPT2, RPT3) and one worker node (RPT4), ensuring redundancy and efficient load distribution among the units. The choice of Raspberry Pi 4B underscores the project’s commitment to combining cost-effectiveness with powerful computing capabilities.

Software and Configuration

Running Raspberry Pi OS Lite 64-bit (Bookworm, Debian 12) on ARM64, the setup is optimized for headless access over SSH, underpinning the system’s focus on security and remote manageability. Key software components include Docker Compose for container orchestration, an NFS server for centralized data storage, and Autofs for mounting that storage across the nodes. Additionally, Neofetch provides at-a-glance system information, including CPU temperature, so the system’s health can be monitored easily.

Unattended updates ensure the system remains secure and up-to-date without manual intervention. Special configurations for power management and memory sharing highlight the project’s attention to detail in optimizing performance and reliability.

Swarm Configuration and Power Management

The GPT Docker Swarm configuration includes innovative solutions for power management, allowing for centralized control over the power states of all nodes. This feature is particularly useful in scenarios where power efficiency and quick system restarts are crucial.

Application Deployment and Management

Leveraging Portainer, the GPT Docker Swarm simplifies the deployment and management of services. This approach not only facilitates the use of ARM64-compatible Docker images but also emphasizes persistent data storage by binding service-specific data to the “/data” directory on the master node. This method ensures data persistence and simplifies the management of services like Nginx, demonstrating the system’s adaptability to various application needs.

Conclusion

The GPT Docker Swarm represents a forward-thinking solution that marries the simplicity and cost-effectiveness of Raspberry Pi hardware with the sophistication of Docker container orchestration. This setup is a testament to the versatility and power of combining open-source technologies to create a resilient, scalable, and efficient computing environment suitable for a wide range of applications, from home labs to educational environments and beyond.




How to Backup Docker Data to a Different Location in Your LAN

Prerequisites

  • Docker data located at /var/lib/docker/volumes.
  • SSH access to the target backup system.

Passwordless SSH Login

First, set up passwordless SSH login:

ssh-keygen -t rsa
ssh-copy-id root@192.168.0.225
ssh root@192.168.0.225

Docker Volume Backup Script

Create a backup script named docker_backup.sh:

#!/bin/bash
set -e

# Define variables
source_dir="/var/lib/docker/volumes"
backup_dir="/opt/docker_backups"
keep_backups=10   # retention in days (used with find -mtime below)
current_datetime=$(date +"%Y-%m-%d_%H-%M-%S")
backup_filename="$current_datetime-backup.tar"
remote_user="root"
remote_server="192.168.0.225"
remote_dir="/opt/remote_docker_backups"

# Check if source and backup directories exist
if [ ! -d "$source_dir" ]; then
  echo "Source directory does not exist."
  exit 1
fi
if [ ! -d "$backup_dir" ]; then
  echo "Backup directory does not exist."
  exit 1
fi

# Stop running Docker containers
if [ "$(docker ps -q)" ]; then
  docker stop $(docker ps -q)
fi

# Create the backup
tar -cpf "$backup_dir/$backup_filename" "$source_dir"

# Start containers again (note: this starts all containers, including any that were already stopped)
if [ "$(docker ps -a -q)" ]; then
  docker start $(docker ps -a -q)
fi

# Compress and transfer the backup
gzip "$backup_dir/$backup_filename"
backup_filename="$current_datetime-backup.tar.gz"
scp "$backup_dir/$backup_filename" "$remote_user@$remote_server:$remote_dir"

# Remove old backups
find "$backup_dir" -type f -name "*-backup.tar.gz" -mtime +$keep_backups -exec rm {} \;
ssh "$remote_user@$remote_server" "find $remote_dir -type f -name '*-backup.tar.gz' -mtime +$keep_backups -exec rm {} \;"

echo "Backup was created: $backup_dir/$backup_filename and copied to $remote_server:$remote_dir."

Run the script:

sudo su
chmod +x docker_backup.sh
./docker_backup.sh
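
To run the backup automatically, a cron entry along these lines could be added on the Docker host; the script path is an assumption, so adjust it to wherever you saved docker_backup.sh:

# /etc/crontab: run the backup every night at 03:00
0 3 * * * root /opt/scripts/docker_backup.sh >> /var/log/docker_backup.log 2>&1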

Ansible Alternative

Create an Ansible playbook named docker_backup.yml:

---
- name: Docker Backup Playbook
  hosts: rpidocker
  become: yes
  vars:
    source_dir: "/var/lib/docker/volumes"
    backup_dir: "/opt/docker_backups"
    keep_backups: 10
    current_datetime: "{{ lookup('pipe', 'date +%Y-%m-%d_%H-%M-%S') }}"
    backup_filename: "{{ current_datetime }}-backup.tar"
    remote_user: "root"
    remote_server: "192.168.0.225"
    remote_dir: "/opt/remote_docker_backups"

  tasks:
    - name: Check if source directory exists
      stat:
        path: "{{ source_dir }}"
      register: source_dir_stat

    - name: Fail if source directory does not exist
      fail:
        msg: "Source directory does not exist."
      when: not source_dir_stat.stat.exists

    - name: Check if backup directory exists
      stat:
        path: "{{ backup_dir }}"
      register: backup_dir_stat

    - name: Fail if backup directory does not exist
      fail:
        msg: "Backup directory does not exist."
      when: not backup_dir_stat.stat.exists

    - name: Stop running Docker containers
      shell: docker stop $(docker ps -q)
      ignore_errors: yes

    - name: Create backup archive
      command: tar -cpf "{{ backup_dir }}/{{ backup_filename }}" "{{ source_dir }}"

    - name: Start all Docker containers
      shell: docker start $(docker ps -a -q)
      ignore_errors: yes

    - name: Compress the backup archive
      command: gzip "{{ backup_dir }}/{{ backup_filename }}"
      args:
        chdir: "{{ backup_dir }}"

    - name: Copy backup to remote server
      synchronize:
        src: "{{ backup_dir }}/{{ backup_filename }}.gz"
        dest: "{{ remote_user }}@{{ remote_server }}:{{ remote_dir }}"
        mode: push

    - name: Delete older backups locally
      shell: find "{{ backup_dir }}" -type f -name "*-backup.tar.gz" -mtime +{{ keep_backups }} -exec rm {} \;

    - name: Delete older backups on remote server
      shell: ssh "{{ remote_user }}@{{ remote_server }}" "find {{ remote_dir }} -type f -name '*-backup.tar.gz' -mtime +{{ keep_backups }} -exec rm {} \;"

Run the playbook:

ansible-playbook -i inventory.ini docker_backup.yml

Your inventory.ini should look like:

[rpidocker]
192.168.0.224 ansible_user=root ansible_ssh_private_key_file=/path/to/your/private/key

Conclusion

You now have two methods to back up your Docker data securely to another location within your LAN. Choose the one that best fits your needs.




Setting Up Omada Controller on a Raspberry Pi 4 with Docker

Introduction

Managing TP-Link EAP devices becomes a breeze when you have a centralized controller. In this guide, we’ll walk through the steps to set up an Omada Controller on a Raspberry Pi 4 using Docker. This is an excellent solution for both home and small business networks.

Prerequisites

  • Raspberry Pi 4 with 4GB RAM
  • Docker installed on the Raspberry Pi
  • SSH access to the Raspberry Pi

Step-by-Step Guide

Step 1: SSH into Your Raspberry Pi

First, connect to your Raspberry Pi using SSH. This will allow you to execute commands remotely.

Step 2: Pull the Omada Controller Docker Image

Run the following command to pull the latest Omada Controller Docker image:

docker pull mbentley/omada-controller:latest

Step 3: Create Data Directories

Create directories to store Omada Controller’s data and work files:

mkdir -p /opt/tplink/OmadaController/data
mkdir -p /opt/tplink/OmadaController/work

Step 4: Run the Omada Controller Container

Execute the following command to run the Omada Controller container:

docker run -d \
  --name omada-controller \
  --restart unless-stopped \
  -e TZ='Europe/Copenhagen' \
  -e SMALL_FILES=false \
  -p 8088:8088 \
  -p 8043:8043 \
  -p 27001:27001/udp \
  -p 27002:27002 \
  -p 29810:29810/udp \
  -p 29811:29811 \
  -p 29812:29812 \
  -p 29813:29813 \
  -v /opt/tplink/OmadaController/data:/opt/tplink/EAPController/data \
  -v /opt/tplink/OmadaController/work:/opt/tplink/EAPController/work \
  mbentley/omada-controller:latest
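
Before opening the web interface, you can check that the container came up cleanly; the first start can take a few minutes:

docker ps --filter name=omada-controller
docker logs -f omada-controller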

Step 5: Access the Omada Controller

Finally, open a web browser and navigate to https://<Raspberry_Pi_IP>:8043. Follow the setup wizard to complete the installation.

Note: Replace <Raspberry_Pi_IP> with the actual IP address of your Raspberry Pi.

Conclusion

You’ve successfully set up an Omada Controller on your Raspberry Pi 4 using Docker. This will help you manage your TP-Link EAP devices efficiently. If you have any questions or run into issues, feel free to reach out.

