Exploring New Horizons: My Transition from Windows 11 to Debian 12 with KDE

In a world dominated by mainstream operating systems and tech giants, it’s refreshing to take a path less traveled. That’s precisely what I did when I decided to leave behind Windows 11 on my laptop and embrace Debian 12 with KDE. This shift was driven by my desire for control, customization, and a touch of curiosity.

Why Debian Over Windows?

  • Autonomy Over Updates: One of my biggest gripes with Windows was its intrusive update system. It seemed like Windows would force restarts at the most inconvenient times, disrupting my workflow. With Debian, I control when updates happen, ensuring they only occur when it’s suitable for me.
  • Privacy Concerns: The increasing integration of cloud services and data collection by big tech companies made me uncomfortable. I was not fond of my data residing in the cloud or being a part of an ecosystem that felt more like a trap than a service.
  • Customization Freedom: KDE on Debian offers an unparalleled level of customization. I can tailor the menus, desktop, and overall interface to match my preferences, making my computing experience genuinely personal.

Embracing the Linux Ecosystem

  • Compatibility Solutions: With Wine, Steam’s Proton compatibility layer, and mature Vulkan graphics drivers, I can run almost everything I need on Debian. For software that isn’t currently compatible, I’ve taken a proactive approach by reaching out to companies to request Linux versions of their products.
  • Challenging the Norm: It’s easy to stay comfortable with what’s familiar, but where’s the fun in that? Switching to Linux has reinvigorated my relationship with technology. It’s about learning new skills, solving puzzles when installation issues arise, and genuinely enjoying the process of making my operating system work for me.
  • Performance Considerations: Windows 11 (and whatever succeeds it) demands increasingly newer hardware, which is not always feasible or desirable. Debian runs smoothly on a wide range of hardware, including older machines that might struggle with newer Windows versions.

Conclusion

This journey isn’t just about ditching one operating system for another; it’s about reclaiming the tech space as my own, where I set the rules and boundaries. While Linux isn’t perfect, it’s a step away from the monotony of mainstream operating systems and a step towards something that feels exciting and new. For those tired of the same old routine, maybe it’s time to consider what Debian—or any Linux distribution—can offer you.




Setting Up NTP on Raspberry Pi “Bookworm” for Accurate Timekeeping

Introduction

Accurate timekeeping is essential for many applications, from logging to scheduling tasks. While there are numerous public NTP servers, pointing your machines at a local NTP server offers more control and can be quite satisfying. In this blog post, we’ll explore how to configure a Raspberry Pi running the Debian 12 “Bookworm” release of Raspberry Pi OS to sync its time against a local NTP server.

Why Raspberry Pi?

The Raspberry Pi is a versatile and affordable piece of hardware that can handle a variety of tasks, including acting as an NTP client. Its low power consumption and ease of use make it an ideal choice for this purpose.

Prerequisites

  • A Raspberry Pi running Raspberry Pi OS “Bookworm” (or another systemd-based distribution)
  • Basic knowledge of Linux commands and vi text editor
  • Access to an NTP server (In this example, we use 192.168.0.1)

Step-by-Step Guide

  1. Check Current Time Sync Status
   timedatectl show-timesync --all

This command will display detailed information about the current time synchronization status on your Raspberry Pi.

  2. Edit the Configuration File
   sudo vi /etc/systemd/timesyncd.conf

Open the timesyncd.conf file using the vi text editor with superuser permissions.

  3. Set the NTP Server
   NTP=192.168.0.1

Add this line in the [Time] section of the timesyncd.conf file (uncommenting the existing NTP= entry if one is present) to specify the NTP server you want to use. Save and exit the file.
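After the edit, the relevant part of /etc/systemd/timesyncd.conf should look something like this (the FallbackNTP entry is optional, and the pool shown is only an example):

```
[Time]
NTP=192.168.0.1
#FallbackNTP=0.debian.pool.ntp.org
```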

  4. Restart the Time Sync Service
   sudo systemctl restart systemd-timesyncd

Restart the systemd-timesyncd service to apply the changes.

  5. Verify the Changes
   timedatectl show-timesync --all

Run this command again to confirm that the NTP server is now set to 192.168.0.1.

  6. Check the System Time
   date

Finally, use the date command to display the current system time, ensuring that it is accurate.

Conclusion

Configuring your Raspberry Pi “Bookworm” to use a specific NTP server is a straightforward process that ensures accurate timekeeping. This setup is particularly useful for networks where precise time is crucial. With these simple steps, you can achieve better control and reliability in time synchronization.

I hope you find this guide useful for setting up NTP on your Raspberry Pi “Bookworm.” Feel free to adapt these steps according to your specific needs.




How to Upgrade from Debian Bullseye to Bookworm on Raspberry Pi Without Reinstalling

Upgrading your Raspberry Pi’s operating system doesn’t always require a complete reinstallation. If you’re running Debian Bullseye and want to move to the newer Bookworm version, you can do so with a few simple steps. Before proceeding, make sure to back up all important data.

Step 1: Backup Your Data

First and foremost, backup all your important files. This ensures that you can recover your data in case something goes wrong during the upgrade.

Step 2: Update Current System

Open a terminal and run the following commands to update your existing Bullseye system:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade

Step 3: Change Repositories

Edit the /etc/apt/sources.list file and any files in /etc/apt/sources.list.d/ to replace ‘bullseye’ with ‘bookworm’. You can use the nano text editor for this:

sudo nano /etc/apt/sources.list
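If you prefer a one-liner over editing the files by hand, sed can do the same replacement. A quick way to sanity-check the expression is to run it against a sample line first:

```shell
# Show the substitution on a sample sources.list line:
echo "deb http://deb.debian.org/debian bullseye main contrib" \
  | sed 's/bullseye/bookworm/g'
# → deb http://deb.debian.org/debian bookworm main contrib
```

Once you are happy with it, apply the same expression in place with `sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list`, and re-check the files afterwards.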

Step 4: Update Package List

After updating the repositories, run the following command to fetch the new package list:

sudo apt update

Step 5: Upgrade Packages

Execute the following commands to perform the upgrade:

sudo apt upgrade
sudo apt dist-upgrade

Step 6: Clean Up

Remove any obsolete packages to free up space:

sudo apt autoremove
sudo apt clean

Step 7: Reboot

Finally, reboot your Raspberry Pi to complete the upgrade:

sudo reboot

And there you have it! You’ve successfully upgraded from Debian Bullseye to Bookworm on your Raspberry Pi without a complete reinstallation. Always remember to read the specific release notes and upgrade instructions for your situation.






How to Backup Docker Data to a Different Location in Your LAN

Prerequisites

  • Docker data located at /var/lib/docker/volumes.
  • SSH access to the target backup system.

Passwordless SSH Login

First, set up passwordless SSH login:

ssh-keygen -t rsa
ssh-copy-id root@192.168.0.225
ssh root@192.168.0.225

Docker Volume Backup Script

Create a backup script named docker_backup.sh:

#!/bin/bash
set -e

# Define variables
source_dir="/var/lib/docker/volumes"
backup_dir="/opt/docker_backups"
keep_backups=10
current_datetime=$(date +"%Y-%m-%d_%H-%M-%S")
backup_filename="$current_datetime-backup.tar"
remote_user="root"
remote_server="192.168.0.225"
remote_dir="/opt/remote_docker_backups"

# Check if source and backup directories exist
if [ ! -d "$source_dir" ]; then
  echo "Source directory does not exist."
  exit 1
fi
if [ ! -d "$backup_dir" ]; then
  echo "Backup directory does not exist."
  exit 1
fi

# Stop running Docker containers, remembering which ones were up
running_containers=$(docker ps -q)
if [ -n "$running_containers" ]; then
  docker stop $running_containers
fi

# Create the backup
tar -cpf "$backup_dir/$backup_filename" "$source_dir"

# Restart only the containers that were running before the backup
if [ -n "$running_containers" ]; then
  docker start $running_containers
fi

# Compress and transfer the backup
gzip "$backup_dir/$backup_filename"
backup_filename="$current_datetime-backup.tar.gz"
scp "$backup_dir/$backup_filename" "$remote_user@$remote_server:$remote_dir"

# Remove backups older than $keep_backups days (locally and on the remote server)
find "$backup_dir" -type f -name "*-backup.tar.gz" -mtime +$keep_backups -exec rm {} \;
ssh "$remote_user@$remote_server" "find $remote_dir -type f -name '*-backup.tar.gz' -mtime +$keep_backups -exec rm {} \;"

echo "Backup was created: $backup_dir/$backup_filename and copied to $remote_server:$remote_dir."

Run the script:

sudo su
chmod +x docker_backup.sh
./docker_backup.sh
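If you want the backup to run on a schedule instead of by hand, a cron entry in root’s crontab (edited with `sudo crontab -e`) will do the job. The path and time below are only examples; point the entry at wherever you saved docker_backup.sh:

```
# Run the Docker backup every day at 02:30 and log its output
30 2 * * * /root/docker_backup.sh >> /var/log/docker_backup.log 2>&1
```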

Ansible Alternative

Create an Ansible playbook named docker_backup.yml:

---
- name: Docker Backup Playbook
  hosts: rpidocker
  become: yes
  vars:
    source_dir: "/var/lib/docker/volumes"
    backup_dir: "/opt/docker_backups"
    keep_backups: 10
    current_datetime: "{{ lookup('pipe', 'date +%Y-%m-%d_%H-%M-%S') }}"
    backup_filename: "{{ current_datetime }}-backup.tar"
    remote_user: "root"
    remote_server: "192.168.0.225"
    remote_dir: "/opt/remote_docker_backups"

  tasks:
    - name: Check if source directory exists
      stat:
        path: "{{ source_dir }}"
      register: source_dir_stat

    - name: Fail if source directory does not exist
      fail:
        msg: "Source directory does not exist."
      when: not source_dir_stat.stat.exists

    - name: Check if backup directory exists
      stat:
        path: "{{ backup_dir }}"
      register: backup_dir_stat

    - name: Fail if backup directory does not exist
      fail:
        msg: "Backup directory does not exist."
      when: not backup_dir_stat.stat.exists

    - name: Stop running Docker containers
      shell: docker ps -q | xargs -r docker stop
      ignore_errors: yes

    - name: Create backup archive
      command: tar -cpf "{{ backup_dir }}/{{ backup_filename }}" "{{ source_dir }}"

    - name: Start all Docker containers
      shell: docker ps -a -q | xargs -r docker start
      ignore_errors: yes

    - name: Compress the backup archive
      command: gzip "{{ backup_dir }}/{{ backup_filename }}"
      args:
        chdir: "{{ backup_dir }}"

    - name: Copy backup to remote server
      command: scp "{{ backup_dir }}/{{ backup_filename }}.gz" "{{ remote_user }}@{{ remote_server }}:{{ remote_dir }}/"

    - name: Delete older backups locally
      shell: find "{{ backup_dir }}" -type f -name "*-backup.tar.gz" -mtime +{{ keep_backups }} -exec rm {} \;

    - name: Delete older backups on remote server
      shell: ssh "{{ remote_user }}@{{ remote_server }}" "find {{ remote_dir }} -type f -name '*-backup.tar.gz' -mtime +{{ keep_backups }} -exec rm {} \;"

Run the playbook:

ansible-playbook -i inventory.ini docker_backup.yml

Your inventory.ini should look like:

[rpidocker]
192.168.0.224 ansible_user=root ansible_ssh_private_key_file=/path/to/your/private/key

Conclusion

You now have two methods to back up your Docker data securely to another location within your LAN. Choose the one that best fits your needs.




Unleashing the Power of Linux on Single Board Computers: A World of Benefits

Introduction

Single Board Computers (SBCs) have revolutionized the tech landscape, offering compact and affordable computing solutions for a myriad of applications. While Windows is a widely-used operating system, Linux stands out as a compelling alternative for SBCs. In this blog post, we will explore the numerous benefits of using Linux over Windows on Single Board Computers, and why it has become the go-to choice for tech enthusiasts and professionals alike.

  1. Lightweight and Efficient

One of the most significant advantages of using Linux on an SBC is its lightweight and efficient nature. Linux distributions are designed to be resource-friendly, ensuring optimal performance even on low-spec SBCs with limited RAM and processing power. This efficiency allows SBCs to perform tasks seamlessly without unnecessary strain on hardware.

  2. Vast Variety of Distributions

Linux’s open-source nature has given rise to a diverse array of distributions (distros), each tailored to specific use cases and user preferences. From beginner-friendly distros like Ubuntu and Raspberry Pi OS to advanced options like Arch Linux, users have the freedom to choose the one that best suits their needs. This flexibility empowers users to customize their SBC environment as they see fit.

  3. Cost-Effectiveness

Many SBCs are designed to cater to budget-conscious users, and Linux perfectly complements this ethos. As a free and open-source operating system, Linux eliminates the need for expensive licensing fees, making it an economical choice for SBC enthusiasts and projects with tight budgets.

  4. Extensive Community Support

Linux boasts a vibrant and supportive community of developers and enthusiasts. The open-source nature of the operating system encourages collaboration, leading to extensive documentation, tutorials, and forums where users can seek help and share knowledge. This community-driven support makes troubleshooting and learning an enriching experience for SBC users.

  5. Command Line Flexibility

Linux’s command-line interface (CLI) provides unmatched power and flexibility for advanced users. With the CLI, users can perform intricate tasks, automate processes, and fine-tune their SBCs to meet specific requirements. This level of control is invaluable for developers, sysadmins, and tech enthusiasts seeking to explore the full potential of their SBCs.

  6. Enhanced Security

Linux’s security reputation is well-established, especially when compared to other operating systems. The modular design of Linux, combined with frequent security updates and the ability to restrict unnecessary access, fortifies your SBC against potential cyber threats.

  7. Optimized for IoT and Embedded Systems

For IoT (Internet of Things) and embedded system applications, Linux shines as the ideal choice. Its lightweight nature and extensive driver support make it well-suited for resource-constrained devices, enabling seamless integration of SBCs into various IoT projects.

Conclusion

Choosing Linux over Windows for your Single Board Computer brings a myriad of benefits, ranging from performance efficiency and cost-effectiveness to community support and enhanced security. The freedom to customize and tailor your SBC environment, coupled with the vast variety of Linux distributions, empowers users to create a personalized and powerful computing platform.

As the world of technology continues to evolve, Linux remains at the forefront of innovation, making it the ultimate choice for maximizing the potential of Single Board Computers. Whether you are a tech enthusiast, developer, or hobbyist, embracing Linux on your SBC opens up a world of possibilities, where creativity and functionality meet in perfect harmony.




Unleashing the Power of ZimaBoard: Your Ultimate Proxmox Server for Debian, Docker, pfSense, and OPNsense

Introduction

In the realm of home servers and networking, finding a versatile, cost-effective, and powerful solution can be a daunting task. However, the ZimaBoard stands out as a game-changer, offering a compelling combination of performance and flexibility. In this blog post, we will explore how the ZimaBoard can serve as your Proxmox server for running Debian and Docker, while also doubling as an efficient platform for pfSense and OPNsense, empowering you to take control of your home network like never before.

  1. Introducing ZimaBoard: A Swiss Army Knife for Home Servers

The ZimaBoard is a single-board computer based on the powerful Intel Apollo Lake SoC architecture. Despite its compact form factor, this mini-ITX board packs a punch with its quad-core processor and up to 8GB of RAM. These specifications make it an excellent candidate for various server applications, including virtualization, networking, and containerization.

  2. Proxmox Virtualization: The Foundation of Your Server

Proxmox Virtual Environment is an open-source virtualization platform that allows you to run multiple virtual machines (VMs) and containers on a single host. With the ZimaBoard as your Proxmox server, you can efficiently manage and deploy Debian-based VMs for various purposes, such as web hosting, media servers, or personal development environments.

  3. Harnessing the Power of Docker

Docker has revolutionized software containerization, providing a lightweight and scalable solution for deploying applications. With the ZimaBoard’s ample resources, you can create and manage Docker containers effortlessly. Whether you need a dedicated container for your favorite media server or a secure environment for testing new software, Docker on ZimaBoard offers unmatched versatility.

  4. pfSense and OPNsense: Elevating Your Network Security

Apart from being a powerhouse for virtualization and containerization, the ZimaBoard can transform into a robust network security appliance. By installing either pfSense or OPNsense, both of which are open-source firewall distributions, you can turn your ZimaBoard into a capable and feature-rich home gateway. These firewall solutions offer stateful packet inspection, VPN support, intrusion prevention, and other advanced security features, ensuring your network remains safe from potential threats.

  5. The Beauty of Low Power Consumption

Efficiency is a crucial aspect of any home server or networking device. The ZimaBoard shines in this department, boasting an energy-efficient design that minimizes power consumption without compromising performance. By using the ZimaBoard as your all-in-one server solution, you can save on electricity costs while contributing to a greener environment.

  6. Ease of Setup and Configuration

Setting up the ZimaBoard as your Proxmox server, Debian host, Docker environment, or firewall appliance is remarkably straightforward, thanks to the detailed documentation and active community support. Whether you are a seasoned server administrator or a tech enthusiast exploring the world of home servers, the ZimaBoard offers a user-friendly experience that caters to all skill levels.

Conclusion

The ZimaBoard is a true marvel of modern technology, serving as a Proxmox server for virtualization, a Debian host for various applications, a Docker playground for containerization, and a reliable platform for running pfSense or OPNsense for network security. Its compact form factor, energy efficiency, and impressive performance make it an ideal choice for those seeking a versatile and cost-effective home server solution.

So, why settle for multiple devices when the ZimaBoard can fulfill all your server needs? Embrace the power and flexibility of ZimaBoard, and take charge of your home network and server infrastructure like a pro. Whether you’re a developer, a media enthusiast, or a security-conscious user, the ZimaBoard is the gateway to endless possibilities.




Installing and Configuring Unattended Upgrades on Raspberry Pi

Introduction:
Unattended Upgrades is a package that automates the process of installing security updates on your Raspberry Pi. In this guide, we’ll walk you through the steps to install and configure Unattended Upgrades, ensuring your Raspberry Pi stays up-to-date with the latest patches automatically.

Step 1: Install Unattended Upgrades

To install Unattended Upgrades on your Raspberry Pi, update the package list and install it with the following commands:

sudo apt-get update
sudo apt-get install unattended-upgrades -y

Step 2: Test Unattended Upgrades (Dry Run)

Before enabling Unattended Upgrades, it’s a good idea to test it with a dry run to see what updates would be applied. Use the following command:

sudo unattended-upgrade -d -v --dry-run

This will show you the pending updates without actually installing them.

Step 3: Enable Unattended Upgrades

To enable Unattended Upgrades, reconfigure the package with the following command; the low debconf priority ensures you are shown the question that turns automatic updates on:

sudo dpkg-reconfigure --priority=low unattended-upgrades

Follow the prompts to enable automatic updates.

Step 4: What to Expect

With Unattended Upgrades enabled, your Raspberry Pi will now automatically download and install security updates, keeping your system protected with the latest patches.
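For reference, enabling the service this way writes /etc/apt/apt.conf.d/20auto-upgrades, which on a typical Debian-based system contains these two lines (daily package list updates and daily unattended upgrade runs):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Finer-grained behavior, such as which package origins are upgraded automatically, is configured in /etc/apt/apt.conf.d/50unattended-upgrades.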

Conclusion:
By following these steps, you have successfully installed and configured Unattended Upgrades on your Raspberry Pi. This means your Raspberry Pi will now automatically update itself with security patches, ensuring that it remains secure and up-to-date without manual intervention.

Enjoy the peace of mind knowing that your Raspberry Pi is being kept secure and up-to-date automatically! Happy computing!




Setting Up NFS Server on Raspberry Pi

Introduction:
NFS (Network File System) allows you to share directories and files between systems over a network. In this guide, we’ll walk you through setting up an NFS server on your Raspberry Pi and sharing a directory with other systems.

Step 1: Install NFS Server
To install the NFS server package, run the following command:

sudo apt-get install nfs-kernel-server -y

Step 2: Create the NFS Share Directory
Next, create a directory that you want to share with other systems. In this example, we’ll use /mnt/nfsshare:

sudo mkdir /mnt/nfsshare

Step 3: Set Permissions for the NFS Share
For the NFS share to be accessible, we need to set appropriate permissions on the shared directory. In this example, we’ll set the permissions to allow everyone full access. Note that this configuration might not be suitable for production systems.

sudo chmod 777 /mnt/nfsshare

Step 4: Configure the NFS Export
Open the /etc/exports file in a text editor:

sudo vi /etc/exports

Insert the following line at the end of the file to define the NFS share:

/mnt/nfsshare *(rw,no_root_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)

Explanation of the options used:

  • *: Allow any client to access the share.
  • rw: Allow read and write access to the share.
  • no_root_squash: Do not map the client’s root user to an anonymous user, so root on the client keeps root privileges on the share (use with caution).
  • insecure: Allow the use of non-privileged ports for NFS.
  • async: Improve performance by allowing asynchronous writes to the shared directory.
  • no_subtree_check: Disable subtree checking to improve performance.
  • anonuid=1000 and anongid=1000: Map the anonymous user to the UID and GID 1000, which typically represents the first regular user on most systems.

Step 5: Activate the NFS Export
After modifying the /etc/exports file, apply the changes using the following command:

sudo exportfs -ra

Step 6: Start NFS Services
Start the NFS services to make the shared directory accessible:

sudo systemctl start nfs-kernel-server

Step 7: Enable NFS Services on Boot
To ensure the NFS services start automatically on boot, enable them with the following command:

sudo systemctl enable nfs-kernel-server

Conclusion:
Congratulations! You have successfully set up an NFS server on your Raspberry Pi. The /mnt/nfsshare directory is now shared with other systems on the network. You can access this NFS share from other machines using the appropriate mount command.
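To mount the share from another Linux machine, create a mount point and run the NFS mount command, for example `sudo mkdir -p /mnt/nfsshare && sudo mount -t nfs 192.168.0.50:/mnt/nfsshare /mnt/nfsshare` (192.168.0.50 is a placeholder for your Pi’s address). To make the mount persistent across reboots, a line like this can be added to the client’s /etc/fstab:

```
192.168.0.50:/mnt/nfsshare  /mnt/nfsshare  nfs  defaults,_netdev  0  0
```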

Please note that for production environments or when sharing with specific clients, you should consider setting more secure and restricted permissions in the /etc/exports file.

Happy sharing!




Automating Neofetch Installation and Configuration using Ansible

Introduction:
Neofetch is a simple and visually appealing command-line system information tool. By using Ansible, we can automate the installation of Neofetch and its configuration, including adding it to the .bashrc file and displaying the Raspberry Pi’s temperature. In this guide, we’ll walk you through creating an Ansible playbook to achieve this automation.

Step 1: Install Ansible
Before proceeding, make sure you have Ansible installed on your system. If you haven’t installed Ansible yet, you can follow the official Ansible installation instructions for your operating system.

Step 2: Create the Ansible Playbook
Create a new YAML file named neofetch_setup.yaml and add the following content to it:

---
- name: Install Neofetch and update .bashrc
  hosts: your_host  # Replace "your_host" with the target host or group where you want to install Neofetch.
  become: yes

  tasks:
    - name: Install Neofetch
      apt:
        name: neofetch
        state: present

    - name: Add Neofetch to .bashrc
      lineinfile:
        path: ~/.bashrc
        line: neofetch

    - name: Add vcgencmd measure_temp to .bashrc
      lineinfile:
        path: ~/.bashrc
        line: vcgencmd measure_temp

In the playbook, make sure to replace “your_host” with the target host or group where you want to install Neofetch, and ensure that the target host has the necessary privileges (sudo access) to install packages. Note that because the play runs with become: yes, ~/.bashrc resolves to the root user’s .bashrc; point path at a specific user’s file (for example /home/pi/.bashrc) if the change should apply to that user instead.

Step 3: Run the Ansible Playbook
To run the Ansible playbook and install Neofetch on the target host, use the following command:

ansible-playbook neofetch_setup.yaml

Ansible will connect to the target host, install Neofetch, and update the .bashrc file with the appropriate commands.

Step 4: Verify Neofetch Installation
To verify that Neofetch is installed and configured correctly, log in to the target host and open a new terminal. You should see the Neofetch output displaying system information. Additionally, the Raspberry Pi’s temperature will be displayed along with the system details.

Conclusion:
You’ve successfully created an Ansible playbook to automate the installation of Neofetch and its configuration on your Raspberry Pi. Ansible allows you to manage multiple hosts efficiently and ensure consistent setups across your infrastructure. Now you can enjoy using Neofetch to get a stylish system summary each time you open a terminal.

Happy system information tracking!




Running Kali Linux in a Docker Container

Introduction:
Kali Linux is a popular penetration testing and ethical hacking distribution. With Docker, you can run Kali Linux in a containerized environment, making it easy to experiment and perform security assessments. In this guide, we’ll show you how to run Kali Linux as a Docker container and access it interactively.

Step 1: Install Docker
Before proceeding, make sure you have Docker installed on your system. If you haven’t installed Docker yet, follow the official Docker installation instructions for your operating system.

Step 2: Pull Kali Linux Docker Image
To run Kali Linux, pull the latest Kali Linux Docker image from Docker Hub using the following command:

docker pull kalilinux/kali-rolling

Step 3: Create and Start Kali Linux Container
Run the following command to create and start a Kali Linux container:

docker run -d --name=kalilinux -v kalilinux_data:/data --tty --interactive kalilinux/kali-rolling

Explanation of options used:

  • -d: Run the container in the background (detached mode).
  • --name=kalilinux: Assign the name “kalilinux” to the container for easy management.
  • -v kalilinux_data:/data: Create a Docker volume named “kalilinux_data” and mount it to the /data directory inside the container. This volume allows you to persist data and configuration between container restarts.
  • --tty: Allocate a pseudo-TTY, which enables interactive access to the container’s terminal.
  • --interactive: Keep STDIN open even if not attached. This allows you to interact with the container’s terminal.

Step 4: Access the Kali Linux Container
To access the Kali Linux container interactively, use the following command:

docker exec -it kalilinux bash

This command will attach your current terminal to the running Kali Linux container, providing you with an interactive shell inside the container.
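If you only need a throwaway session, you can skip the named container and the volume entirely; with --rm, Docker deletes the container as soon as you exit the shell, and any changes made inside it are lost:

```
docker run --rm -it kalilinux/kali-rolling bash
```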

Step 5: Clean Up
If you want to stop and remove the Kali Linux container and the associated volume, use the following commands:

docker stop kalilinux
docker rm kalilinux
docker volume rm kalilinux_data

Conclusion:
You’ve successfully run Kali Linux in a Docker container, allowing you to perform security assessments and penetration testing in an isolated environment. Docker enables easy management and deployment of Kali Linux, making it an excellent choice for testing and experimenting with various tools and configurations.

Happy hacking!