How I’m Winning with Windows 11 (without the nags)

Windows 11 is ideal for multifunctional use – Office, games, WSL, hardware options galore – but the built-in defaults slow me down and get annoying fast. These tweaks make it fast, clean, and predictable:

  • Windhawk mods for the stuff Microsoft won’t expose:
    Taskbar Clock Customization (rich clock/date formats), Better File Sizes in Explorer (human-readable sizes), and Taskbar Icon Spacing/Size (tight or roomy, as you like).

  • Everything + Everything Toolbar for instant file search from the taskbar/start area. Windows Search sleeps; Everything sprints.

  • Start11 to restore a sane Start Menu, and wire it to Everything so Start menu searches are local, fast, and ad-free.

  • AutoHotkey to supercharge virtual desktops:
    ALT+1..9 jumps to a desktop; SHIFT+ALT+1..9 moves the focused window there. It’s a perfect “almost-tiling” workflow without the rigidity of a tiling WM. My keymaps live here: https://github.com/ske5074/windows-desktop-switcher (be sure to use the 1.x version of AutoHotkey).

  • Twinkle Tray for one-click monitor brightness (and quick volume), right from the tray – especially handy with multi-monitor setups.

Net result: a quiet, fast Windows 11 desktop that works the way I do—no Edge promos, no Start menu fluff, and muscle-memory moves between clean, purpose-built desktops.


Upgrade Proxmox ZFS boot drive with mirroring

From https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#chapter_zfs

# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 9.32M in 00:00:00 with 0 errors on Thu Apr 3 23:20:51 2025
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          nvme-eui.0025388581b66796-part3 ONLINE       0     0     0
# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda             8:0    0   1.8T  0 disk
└─sda1          8:1    0   1.8T  0 part
zd16          230:16   0    32G  0 disk
├─zd16p1      230:17   0   100M  0 part
├─zd16p2      230:18   0   892M  0 part
└─zd16p3      230:19   0    31G  0 part
zd32          230:32   0    10G  0 disk
├─zd32p1      230:33   0   9.5G  0 part
├─zd32p2      230:34   0     1K  0 part
└─zd32p5      230:37   0   510M  0 part
nvme0n1       259:0    0 476.9G  0 disk
├─nvme0n1p1   259:1    0  1007K  0 part
├─nvme0n1p2   259:2    0     1G  0 part
└─nvme0n1p3   259:3    0 475.9G  0 part

Duplicate the partition table onto the new drive (/dev/sda):

# sgdisk /dev/nvme0n1 -R /dev/sda

Randomize the GUIDs so the two disks are not identical:

# sgdisk -G /dev/sda

Use parted or fdisk to expand partition 3 into the full capacity of the new disk:

# fdisk /dev/sda

Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 126F0F8E-624E-4F4D-8CD4-89F8B2EDE74A

Device       Start        End    Sectors   Size Type
/dev/sda1       34       2047       2014  1007K BIOS boot
/dev/sda2     2048    2099199    2097152     1G EFI System
/dev/sda3  2099200 1000215182  998115983 475.9G Solaris /usr & Apple ZFS

Command (m for help): d
Partition number (1-3, default 3): 3

Partition 3 has been deleted.

Command (m for help): n
Partition number (3-128, default 3): 3
First sector (2099200-3907029134, default 2099200):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2099200-3907029134, default 3907028991):

Created a new partition 3 of type 'Linux filesystem' and of size 1.8 TiB.

Command (m for help): p
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 126F0F8E-624E-4F4D-8CD4-89F8B2EDE74A

Device       Start        End    Sectors  Size Type
/dev/sda1       34       2047       2014 1007K BIOS boot
/dev/sda2     2048    2099199    2097152    1G EFI System
/dev/sda3  2099200 3907028991 3904929792  1.8T Linux filesystem

Command (m for help):

Change the partition type back to “Solaris /usr & Apple ZFS”:

Command (m for help): t
Partition number (1-3, default 3): 3
Partition type or alias (type L to list all): 157

Changed type of partition 'Linux filesystem' to 'Solaris /usr & Apple ZFS'.

Command (m for help): p
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 126F0F8E-624E-4F4D-8CD4-89F8B2EDE74A

Device       Start        End    Sectors  Size Type
/dev/sda1       34       2047       2014 1007K BIOS boot
/dev/sda2     2048    2099199    2097152    1G EFI System
/dev/sda3  2099200 3907028991 3904929792  1.8T Solaris /usr & Apple ZFS

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Check what type of boot setup you have (GRUB / UEFI):

# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
F0A5-6343 is configured with: uefi (versions: 6.8.12-4-pve, 6.8.12-9-pve)

Format the new ESP and copy the boot files over to it:

# proxmox-boot-tool format /dev/sda2
UUID="" SIZE="1073741824" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
Formatting '/dev/sda2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.
# proxmox-boot-tool init /dev/sda2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="F84D-06C6" SIZE="1073741824" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
Mounting '/dev/sda2' on '/var/tmp/espmounts/F84D-06C6'.
Installing systemd-boot..
Created "/var/tmp/espmounts/F84D-06C6/EFI/systemd".
Created "/var/tmp/espmounts/F84D-06C6/EFI/BOOT".
Created "/var/tmp/espmounts/F84D-06C6/loader".
Created "/var/tmp/espmounts/F84D-06C6/loader/entries".
Created "/var/tmp/espmounts/F84D-06C6/EFI/Linux".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/F84D-06C6/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/F84D-06C6/EFI/BOOT/BOOTX64.EFI".
Random seed file /var/tmp/espmounts/F84D-06C6/loader/random-seed successfully written (32 bytes).
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot..
Unmounting '/dev/sda2'.
Adding '/dev/sda2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Copying and configuring kernels on /dev/disk/by-uuid/F0A5-6343
Copying kernel and creating boot-entry for 6.8.12-4-pve
Copying kernel and creating boot-entry for 6.8.12-9-pve
Copying and configuring kernels on /dev/disk/by-uuid/F84D-06C6
Copying kernel and creating boot-entry for 6.8.12-4-pve
Copying kernel and creating boot-entry for 6.8.12-9-pve

Add the new disk to rpool as a mirror device. Important – you have to use partition 3, not just the disk designation.

# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 9.32M in 00:00:00 with 0 errors on Thu Apr 3 23:20:51 2025
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          nvme-eui.0025388581b66796-part3 ONLINE       0     0     0

errors: No known data errors

# zpool attach rpool nvme-eui.0025388581b66796-part3 /dev/sda3
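
Side note: I used /dev/sda3 above for brevity, but a /dev/disk/by-id path survives device renames. A sketch (the by-id name below is a placeholder – list yours first):

# ls -l /dev/disk/by-id/ | grep sda3
# zpool attach rpool nvme-eui.0025388581b66796-part3 /dev/disk/by-id/ata-YOURDISK-part3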

# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Apr 4 03:35:18 2025
        378G / 378G scanned, 853M / 378G issued at 35.5M/s
        841M resilvered, 0.22% done, 03:01:01 to go
config:

        NAME                                STATE     READ WRITE CKSUM
        rpool                               ONLINE       0     0     0
          mirror-0                          ONLINE       0     0     0
            nvme-eui.0025388581b66796-part3 ONLINE       0     0     0
            sda3                            ONLINE       0     0     0  (resilvering)

errors: No known data errors
# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 371G in 03:10:09 with 0 errors on Sat Apr 5 11:24:50 2025
config:

        NAME                                STATE     READ WRITE CKSUM
        rpool                               ONLINE       0     0     0
          mirror-0                          ONLINE       0     0     0
            nvme-eui.0025388581b65b82-part3 ONLINE       0     0     0
            sda3                            ONLINE       0     0     0

Once the resilver completes, power off and replace the old drive with the new one. The system should still boot if it’s UEFI-based.

Once booted, you’ll have a degraded mirror; you can now safely detach the old drive:

# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: resilvered 371G in 03:10:09 with 0 errors on Sat Apr 5 11:24:50 2025
config:

        NAME                     STATE     READ WRITE CKSUM
        rpool                    DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            14929228184534084580 UNAVAIL      0     0     0  was /dev/disk/by-id/nvme-eui.0025388581b65b82-part3
            nvme0n1p3            ONLINE       0     0     0

errors: No known data errors

# zpool detach rpool 14929228184534084580

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   472G   367G   105G        -     1.35T    21%    77%  1.00x  ONLINE  -

If you made partition 3 larger, tell ZFS to expand into the available space:

# zpool set autoexpand=on rpool
# zpool online -e rpool nvme0n1p3
# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  1.82T   367G  1.46T        -         -     5%    19%  1.00x  ONLINE  -



Proxmox GPU Passthrough for Docker in LXC: WebODM (with ClusterODM) and Immich


Remove Old NVIDIA Drivers

  1. List existing NVIDIA or CUDA packages:
    apt list --installed | egrep -i "nvidia|cuda" | cut -d/ -f1
  2. If drivers are listed, uninstall the current NVIDIA runfile driver:
    sudo ./NVIDIA-Linux-*.run --uninstall
  3. Re-check installed packages:
    apt list --installed | egrep -i "nvidia|cuda" | cut -d/ -f1
  4. If any packages remain, remove them:
    apt list --installed | egrep -i "nvidia|cuda" | cut -d/ -f1 | xargs apt remove -y

Setting Up GPU Passthrough on Proxmox Server

  1. Install required packages:
    apt install pve-headers dkms pciutils
  2. Edit /etc/default/grub and update:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
  3. Update grub:
    update-grub2
  4. Blacklist default GPU drivers:
    echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
    echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
    echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
  5. Add to /etc/modules:
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
  6. Update initramfs:
    update-initramfs -u -k all
  7. Reboot the Proxmox server.
  8. Download the NVIDIA driver runfile from NVIDIA’s driver download page. This document uses the official runfile because the distro-packaged driver can break with apt updates.
    Example:

    NVIDIA-Linux-x86_64-570.133.07.run
  9. Set the installer as executable:
    chmod +x NVIDIA-Linux-*.run
  10. Run the installer:
    ./NVIDIA-Linux-*.run
  11. Reboot the Proxmox server again.
  12. Check installation:
    nvidia-smi
  13. Check NVIDIA device IDs:
    ls -al /dev/nvidia*
    Example output:

    crw-rw-rw- 1 root root 195,   0 /dev/nvidia0
    crw-rw-rw- 1 root root 195, 255 /dev/nvidiactl
    crw-rw-rw- 1 root root 509,   0 /dev/nvidia-uvm
    crw-rw-rw- 1 root root 509,   1 /dev/nvidia-uvm-tools

    Note the major device numbers (195 and 509 in this example; some systems expose others, e.g. for nvidia-caps). They must match the cgroup allow rules in the next step.

  14. Edit LXC config file at /etc/pve/lxc/<ID>.conf:
    lxc.cgroup2.devices.allow: c 195:* rwm
    lxc.cgroup2.devices.allow: c 235:* rwm
    lxc.cgroup2.devices.allow: c 255:* rwm
    lxc.cgroup2.devices.allow: c 509:* rwm
    lxc.mount.entry: /dev/nvidia0 /dev/nvidia0 none bind,optional,create=file
    lxc.mount.entry: /dev/nvidiactl /dev/nvidiactl none bind,optional,create=file
    lxc.mount.entry: /dev/nvidia-modeset /dev/nvidia-modeset none bind,optional,create=file
    lxc.mount.entry: /dev/nvidia-uvm /dev/nvidia-uvm none bind,optional,create=file
    lxc.mount.entry: /dev/nvidia-uvm-tools /dev/nvidia-uvm-tools none bind,optional,create=file
  15. Check nvidia-smi again:
    nvidia-smi

Set Up LXC Container for Docker

  1. Install tools:
    apt install pciutils
  2. Install the NVIDIA driver again inside the container, this time without kernel modules (the container shares the host’s kernel):
    ./NVIDIA-Linux-*.run --no-kernel-modules
  3. Install APT prerequisites:
    apt update
    apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
  4. Add NVIDIA APT repo:
    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
  5. Update package list:
    apt update
  6. Install NVIDIA Container Toolkit:
    apt install -y nvidia-container-toolkit
  7. Configure Docker to use NVIDIA runtime:
    sudo nvidia-ctk runtime configure --runtime=docker
  8. Restart Docker:
    sudo systemctl restart docker
  9. Verify GPU inside container:
    nvidia-smi
    Example output:

    +-----------------------------------------------------------------------------------------+
    | NVIDIA-SMI 570.133.07             Driver Version: 570.133.07     CUDA Version: 12.8     |
    |-----------------------------------------+------------------------+----------------------+
    | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
    |                                         |                        |               MIG M. |
    |=========================================+========================+======================|
    |   0  Quadro P600                    On  |   00000000:01:00.0 Off |                  N/A |
    |  0%   37C    P8            N/A  /  N/A  |       3MiB /   2048MiB |      0%      Default |
    +-----------------------------------------+------------------------+----------------------+
    | Processes:                                                                              |
    |  No running processes found                                                             |
    +-----------------------------------------------------------------------------------------+
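
If the toolkit is wired up correctly, a throwaway container should see the GPU as well. A quick sanity check (this follows the sample workload in the NVIDIA Container Toolkit docs; adjust to your setup):

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi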

Reference: NVIDIA Container Toolkit Installation Guide


WebODM + ClusterODM (Docker Setup)

Run ClusterODM on the main node:

docker run -d --rm -ti -p 3000:3000 -p 10000:10000 -p 8080:8080 opendronemap/clusterodm

Run on the worker node:

docker run -d --gpus all --restart always -p 3001:3000 opendronemap/nodeodm:gpu

Start WebODM without its default local node:

./webodm.sh start --default-nodes 0 --detached --port 80

Connect Node in WebODM UI:
Go to: http://10.0.1.131:10000
Add Node: 10.0.1.131:3001
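
To confirm the worker actually got the GPU, you can poke at the running node container (the container ID below is a placeholder; when started with --gpus, the NVIDIA runtime should make nvidia-smi available inside):

docker ps --filter ancestor=opendronemap/nodeodm:gpu
docker exec -it <container-id> nvidia-smi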


Configure Immich for GPU

(Adapted from: Immich Docs)

  1. Download the latest hwaccel.ml.yml file and place it in the same folder as docker-compose.yml.
  2. In docker-compose.yml, under immich-machine-learning, uncomment the extends section and change cpu to the appropriate backend.
  3. Also in immich-machine-learning, add one of: [armnn, cuda, rocm, openvino, rknn] to the image tag.
  4. Redeploy the immich-machine-learning container with the updated settings.
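
A sketch of steps 1 and 4 from the shell (the release-asset URL follows the pattern in the Immich docs – verify it against the current docs before relying on it):

wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml
docker compose up -d immich-machine-learning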

Using Fit-statUSB to provide visual server health in my Proxmox homelab

https://www.amazon.com/Compulab-FIT-STATUSB-fit-statUSB/dp/B07CKFLQ5V
#!/bin/bash

# Define color variables
BLUE="000011"
RED="010000"
YELLOW="050500"
GREEN="000100"
WHITE="111111"
OFF="000000"
DECAY="#FF0000"   # final long-hold color in each blink sequence (note: includes its own leading '#')

# Initialize the serial port
usbreset fit_StatUSB
if [ $? -ne 0 ]; then
    echo "Device not found. Aborting."
    exit 1
fi

sleep 5
stty -F /dev/ttyACM0 9600 raw -echo -echoe -echok -echoctl -echoke
sleep 5

# Function to send a color command to Fit-statUSB
send_color() {
    echo Sending: "B#${1}-250#000000-1000${DECAY}-9999"
    echo -e "B#${1}-250#000000-1000${DECAY}-9999" > /dev/ttyACM0
    sleep 1
}

echo -e "F0001"     > /dev/ttyACM0; sleep 1 # Minimal Transition
echo -e "#${RED}"   > /dev/ttyACM0; sleep 1 # Red
echo -e "#${GREEN}" > /dev/ttyACM0; sleep 1 # Green
echo -e "#${BLUE}"  > /dev/ttyACM0; sleep 1 # Blue
echo -e "#${WHITE}" > /dev/ttyACM0; sleep 1 # White
echo -e "#${OFF}"   > /dev/ttyACM0; sleep 1 # Off

while true; do
    # Get processor idle time using vmstat
    idle=$(vmstat 1 2 | tail -1 | awk '{print $15}')

    # Get Proxmox health state
    expected_votes=$(pvecm status | grep 'Expected votes:' | awk '{print $2}')
    total_votes=$(pvecm status | grep 'Total votes:' | awk '{print $2}')
    flags=$(pvecm status | grep 'Flags:' | awk '{print $2}')

    if [ "$flags" != "Quorate" ]; then
        proxmox_status="$RED"
    elif [ "$expected_votes" != "$total_votes" ]; then
        proxmox_status="$YELLOW"
    else
        proxmox_status="$GREEN"
    fi

    # Check network connectivity
    if ping -c 1 8.8.8.8 &> /dev/null; then
        network_status="$GREEN"
    else
        network_status="$RED"
    fi

    # Determine processor state color
    if [ "$idle" -lt 10 ]; then
        processor_status="$RED"
    elif [ "$idle" -lt 20 ]; then
        processor_status="$YELLOW"
    else
        processor_status="$GREEN"
    fi

    # Repeat the sequences 6 times before re-evaluating the system state
    for i in {1..6}; do
        # Create and send blink sequence with breaks
        send_color "$BLUE" # Initial Blue indicating start of the dataset
        send_color "$processor_status"
        send_color "$proxmox_status"
        send_color "$network_status"
    done
done
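
To keep the indicator running across reboots, one option is a systemd unit. A minimal sketch, assuming the script was saved as /usr/local/bin/statusb.sh (adjust the path to wherever yours lives):

cat > /etc/systemd/system/statusb.service <<'EOF'
[Unit]
Description=Fit-statUSB health indicator
After=multi-user.target

[Service]
ExecStart=/usr/local/bin/statusb.sh
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now statusb.service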

Updated Homelab using M910Qs and P320s

Recently, I gave my homelab a fresh upgrade by adding Lenovo ThinkCentre M910Q Tiny systems and a few P320s equipped with Nvidia Quadro P600 video cards. These systems are compact yet powerful, documented to support up to 32GB of RAM each—but with a bit of tweaking, they can handle an impressive 64GB! They might not be the most powerful setups out there, but with their small form factor and affordability, they make fantastic little Proxmox machines, offering big potential in a small footprint.

For memory, I used PC4-21300 (2666 MHz CL19) 32GB SODIMMs, paired with Intel Core i7 CPUs.
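
To verify the full 64GB is recognized after the upgrade, a quick check from a shell on the box:

free -h
dmidecode -t memory | grep -i size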

Armbian custom builds for different SoCs, using Docker

I’m impressed with how well Armbian works with SoCs. Since I couldn’t find a Raspberry Pi recently, I tried out a “LePotato” board, which has performed well overall. The main issue is the lack of a headless install option for Armbian. Without a FAT partition for /boot, configuring the OS on a PC or Mac before installation is challenging.

I attempted adding /boot to an existing image but struggled. Eventually, I found I could compile Armbian with a revised lepotato.conf file by adding BOOTFS_TYPE="fat". Typically, compiling OS builds requires specific hardware, compilers, libraries, etc., making it a hassle. However, Armbian’s DIY-focused approach made the process surprisingly easy. I even used Docker, so my main OS stayed clean—very cool indeed!

 

From: https://docs.armbian.com/Developer-Guide_Building-with-Docker/

From the docker host:

# apt-get -y -qq install git
# git clone --depth 1 https://github.com/armbian/build 
# cd build
# ./compile.sh docker-shell 

You’ll end up inside the Docker container (notice the new prompt). Now run compile.sh again:

root@09c6235bb6ee:~/armbian# ./compile.sh BOARD=lepotato RELEASE=bullseye BRANCH=current KERNEL_CONFIGURE=yes

I also tried adding BOOTFS_TYPE="fat" to the conf file so I could see the boot files on a PC beforehand:

root@09c6235bb6ee:~/armbian/build/config/boards# more nanopineoplus2.conf
# Allwinner H5 quad core 1GB RAM SoC headless GBE eMMC WiFi/BT
BOARD_NAME="NanoPi Neo Plus 2"
BOARDFAMILY="sun50iw2"
BOOTCONFIG="nanopi_neo_plus2_defconfig"
MODULES="g_serial"
MODULES_BLACKLIST="lima"
DEFAULT_OVERLAYS="usbhost1 usbhost2"
DEFAULT_CONSOLE="serial"
SERIALCON="ttyS0,ttyGS0"
HAS_VIDEO_OUTPUT="no"
KERNEL_TARGET="legacy,current,edge"
BOOTFS_TYPE="fat"
root@09c6235bb6ee:~/armbian# ./compile.sh BOARD=nanopineoplus2 RELEASE=bullseye BRANCH=current KERNEL_CONFIGURE=yes

LePotato and NanoPi Neo Plus2 Goodness

Move the OS to the NanoPi Neo Plus2 8GB eMMC:

Get an Armbian image for the NanoPi and boot it from the microSD. After initial configuration, run /sbin/nand-sata-install and follow the prompts to copy the root filesystem to the eMMC. Then remove the SD card.

Installing DietPi to the 8GB eMMC flash on the NanoPi:

Boot into Armbian on the internal microSD card and put the DietPi image on a microSD card in a USB reader. The DietPi image was small, so I chose to temporarily create an fsarchiver image of it on the Armbian filesystem. This may not work for you if there isn’t enough free space on the booted OS’s microSD card.

Use fdisk to see where the external microSD and the eMMC landed. For me it was the following:

# fdisk -l
Disk /dev/mmcblk2: 7.28 GiB, 7818182656 bytes, 15269888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbc471ea2

Device         Boot Start      End  Sectors  Size Id Type
/dev/mmcblk2p1       8192 15106047 15097856  7.2G 83 Linux


Disk /dev/sda: 14.63 GiB, 15707668480 bytes, 30679040 sectors
Disk model: microSD RDR
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x74239350

Device     Boot Start      End  Sectors  Size Id Type
/dev/sda1        8192 30343168 30334977 14.5G 83 Linux


Disk /dev/mmcblk0: 59.48 GiB, 63864569856 bytes, 124735488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6337f038

Device         Boot Start      End  Sectors  Size Id Type
/dev/mmcblk0p1       2048 31110143 31108096 14.8G 83 Linux
#

It translated to this:

/dev/mmcblk0p1 – the internal microSD Armbian boot device
/dev/sda1 – the microSD card (in the USB reader) holding the DietPi image
/dev/mmcblk2 – the eMMC (7.28 GiB)

Now create an fsarchiver backup of the DietPi filesystem on /dev/sda1:

# fsarchiver savefs -A -j4 -o /DietPi.fsa /dev/sda1

Create a new ext4 partition on the eMMC drive using fdisk

# fdisk /dev/mmcblk2

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/mmcblk2: 7.28 GiB, 7818182656 bytes, 15269888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbc471ea2

Device         Boot Start      End  Sectors  Size Id Type
/dev/mmcblk2p1       8192 15106047 15097856  7.2G 83 Linux

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-15269887, default 2048): 8192
Last sector, +/-sectors or +/-size{K,M,G,T,P} (8192-15269887, default 15269887):

Created a new partition 1 of type 'Linux' and of size 7.3 GiB.

Command (m for help): p
Disk /dev/mmcblk2: 7.28 GiB, 7818182656 bytes, 15269888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbc471ea2

Device         Boot Start      End  Sectors  Size Id Type
/dev/mmcblk2p1       8192 15269887 15261696  7.3G 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
# 

Format the eMMC partition

# mkfs -t ext4 /dev/mmcblk2p1

Restore the fsarchiver image to the new partition:

# fsarchiver restfs /DietPi.fsa id=0,dest=/dev/mmcblk2p1

* files successfully processed:....regfiles=59379, directories=6999, symlinks=5774, hardlinks=331, specials=80
* files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0

Finally, run nand-sata-install to get the boot record onto the eMMC:

# /sbin/nand-sata-install

Once you say yes, it will execute and exit. Now power off, remove the SD cards, and see if it works!

# poweroff  

Unplug the NanoPi and plug it back in. Hopefully it worked! Remember: the root password may be different now, since you’re booting DietPi off the internal eMMC.

 

OctoPrint container on Windows using Debian WSL 2 and Docker Desktop

Here’s a list of steps to get OctoPrint running in a container on Windows. I happen to have a Windows system sitting next to my Ender, so instead of waiting indefinitely for a Raspberry Pi, I decided to run OctoPrint in a container on Windows – if possible. Using Debian was a challenge, but I prefer it over Ubuntu, so I took the extra time to figure it out. Enjoy!

Get USB serial device into Debian

PowerShell (Admin)

PS C> winget install --interactive --exact dorssel.usbipd-win

Debian:

$ sudo apt-get install usbutils hwdata usbip

Powershell Admin:

PS C> usbipd wsl list
BUSID  VID:PID    DEVICE                                                        STATE
1-1    046d:c545  USB Input Device                                              Not attached
1-2    2357:0138  TP-Link Wireless MU-MIMO USB Adapter                          Not attached
1-4    1bcf:28c4  FHD Camera, FHD Camera Microphone                             Not attached
1-5    1a86:7523  USB-SERIAL CH340 (COM4)                                       Not attached
1-13   046d:c52b  Logitech USB Input Device, USB Input Device                   Not attached

PS C> usbipd wsl attach --busid 1-4
usbipd: info: Using default distribution 'Debian'.

PS C> usbipd wsl attach --busid 1-5
usbipd: info: Using default distribution 'Debian'.

PS C> usbipd wsl list
BUSID  VID:PID    DEVICE                                                        STATE
1-1    046d:c545  USB Input Device                                              Not attached
1-2    2357:0138  TP-Link Wireless MU-MIMO USB Adapter                          Not attached
1-4    1bcf:28c4  FHD Camera, FHD Camera Microphone                             Attached - Debian
1-5    1a86:7523  USB-SERIAL CH340 (COM4)                                       Attached - Debian
1-13   046d:c52b  Logitech USB Input Device, USB Input Device                   Not attached
1-23   0bda:9210  USB Attached SCSI (UAS) Mass Storage Device                   Not attached

Debian:

# lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 1bcf:28c4 Sunplus Innovation Technology Inc. FHD Camera Microphone
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

# python3 -m serial.tools.miniterm


--- Available ports:
---  1: /dev/ttyUSB0         'USB Serial'
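
If the serial module isn’t found, miniterm comes from pyserial; on Debian it’s a quick install:

$ sudo apt-get install python3-serial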

docker-compose.yml

version: '2.4'

services:
  octoprint:
    image: octoprint/octoprint
    restart: unless-stopped
    ports:
      - 80:80
    devices:
    # use `python3 -m serial.tools.miniterm` , this requires pyserial
    #  - /dev/ttyACM0:/dev/ttyACM0
    #  - /dev/video0:/dev/video0
      - /dev/ttyUSB0
    volumes:
     - octoprint:/octoprint
    #environment:
    #  - ENABLE_MJPG_STREAMER=true

  ####
  # uncomment if you wish to edit the configuration files of octoprint
  # refer to docs on configuration editing for more information
  ####

  #config-editor:
  #  image: linuxserver/code-server
  #  ports:
  #    - 8443:8443
  #  depends_on:
  #    - octoprint
  #  restart: unless-stopped
  #  environment:
  #    - PUID=0
  #    - PGID=0
  #    - TZ=America/Chicago
  #  volumes:
  #    - octoprint:/octoprint

volumes:
  octoprint:
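
Bring the stack up from the folder holding the compose file, then browse to http://localhost (port 80, per the ports mapping above):

docker-compose up -d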

Success!

Docker volume backup and restore the easy way.

I haven’t had to move Docker volumes around in a few years, but I finally had the need today. As usual, I searched for the process, knowing that most examples are… well… not very good. Just as I was about to resort to a manual job using Ubuntu, I found a great write-up by Jarek Lipski on Medium. Here’s how you back up using Alpine and tar. Also, make sure you “docker stop” the containers that use the volume, so you get a consistent backup.

Which containers use a volume?

docker ps -a --filter volume=[some_volume]

Backup using an alpine image with tar:

docker run --rm -v [some_volume]:/volume -v /tmp:/backup alpine tar -cjf /backup/[some_archive].tar.bz2 -C /volume ./

Restore:

docker run --rm -v [some_volume]:/volume -v /tmp:/backup alpine sh -c "rm -rf /volume/* /volume/..?* /volume/.[!.]* ; tar -C /volume/ -xjf /backup/[some_archive].tar.bz2"

Backup using loomchild/volume-backup

I love that Jarek also created an image, loomchild/volume-backup, to simplify the process further. Here’s how it works:

docker run -v [volume-name]:/volume -v [output-dir]:/backup --rm loomchild/volume-backup backup [archive-name]

Restore:

docker run -v [volume-name]:/volume -v [output-dir]:/backup --rm loomchild/volume-backup restore [archive-name]

What’s great is this method allows inline copying of a volume from one system to another using ssh. Here’s an example Jarek provides:

docker run -v some_volume:/volume --rm --log-driver none loomchild/volume-backup backup -c none - |\
     ssh user@new.machine docker run -i -v some_volume:/volume --rm loomchild/volume-backup restore -c none -

Home Lab KVM with MeshCommander

No Homelab Remote KVM? Intel Chipset? No Problem with Intel’s Management Engine and MeshCommander!

MeshCommander is an application that communicates with the Intel Management Engine (IME) available on most systems with an Intel chipset. Once IME is configured, MeshCommander provides an entry point into the system with a whole range of options: power cycling the system, remote controlling it, even accessing the BIOS. So how does it work? Here are the steps I go through to enable it:

Homelab Server Setup

In the system’s BIOS, look for the Intel Management Engine (IME) settings, enable it, then reset it. Make sure “Press <Ctrl-P> to Enter MEBx” is enabled.

Take note of the key combination needed during BIOS POST. On my systems, to get into the IME settings, it’s <Ctrl-P>.

The initial password is “admin” – change it to your preferred password.

Go into the network settings and either keep the DHCP settings or use a static IP. This is where it gets cool: the IME IP address will be enabled on the main ethernet port of the system ALONG WITH the IP the OS ends up using. And what’s even cooler? Say you disable the ethernet device in Windows; it doesn’t disable the port. The port will remain available for IME functions.

Something I had to learn the hard way: IME does not understand trunking/LACP/port aggregation, so if you plan on trunking multiple ethernet ports together, it will not communicate properly.
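
A quick way to confirm AMT is reachable from another box before installing anything – Intel AMT listens on TCP 16992 (and 16993 for TLS); the IP below is an example:

nmap -p 16992,16993 192.168.1.50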

MeshCommander Setup

Download the Windows MSI from https://www.meshcommander.com/meshcommander

Install MeshCommander on your daily driver (regular client). There are other fancy options offered, but this will get you going.

Run MeshCommander

Add your home lab server to MeshCommander by manually entering the details with “Add Computer…”, or be lazy and use “Scan…”

Click on the discovered systems and modify the settings. Use the password you set up earlier on the server.

Now Connect!

What to do, what to do… Remote Desktop? YES PLEASE!

If the screen is too small, you can change the console font (Debian):

dpkg-reconfigure console-setup

Other Tips

I had issues where remote desktop stopped working after Linux booted, and found it was due to the graphics driver disabling the GPU when no monitor is detected. I couldn’t find a graceful workaround in software, so I went the HDMI dummy plug route. It terminates the HDMI lines so the system believes a monitor is attached, preventing the GPU from shutting down. https://a.co/d/9Cx13G7

Where’s the truth…

By not being on social media sites, I have a choice in what I consume from a news perspective. I want sources that give me the data to form my opinion rather than get opinions framed as news. So this is what works for me:

  1. Use your web browser’s “incognito” mode so tracking cookies aren’t used; this helps prevent news shaping. I use the DuckDuckGo browser, which prevents tracking data from being used. I highly recommend it as the browser on your phone and the search engine on your desktop. Once Google has some history on you, the shaping algorithms take hold quickly.
  2. I use multiple sources. If I watch CNN, I also try to watch Fox News and so on. It helps to see the different perspectives and the spin applied on all sides.
  3. For any of the “news” sites, I first look at the ads I’m bombarded with. I want to determine right off the bat how the site makes money and what generates the most revenue (typically it’s the product you see the most of). They will usually not tell you directly, but you’ll get a sense of what they want you to buy while you stay on the site.
  4. I use Axios; they are trying to be a news source with little spin. You can see their mission statement here: https://www.axios.com/about/ They tell you how they gather and qualify information, and how they make money. Most of the content is short and concise with little fluff. I appreciate the attempt.
  5. I use allsides.com frequently. AllSides is an aggregator that rates news stories as right, left, or center. It’s a good site for getting an honest perspective on things. Their take is that no news is unbiased, so they show you how each story leans. There was an interesting science project by a middle schooler who looked at bias in Google search results using data from AllSides. More detail on the project is located at:
    https://www.allsides.com/blog/teen-proves-media-bias-google-search-results-can-influence-political-opinions
  6. Is the content focused on the subject, and is the opinion kind? If the content attacks a person rather than a position, it’s typically because the argument being made is weak and doesn’t hold up well on its own.

I’m sure I can keep rambling, but the above list encompasses most of what I do. Let me know if you have better methods!

Add HEIC support to Nextcloud

From https://eplt.medium.com/5-minutes-to-install-imagemagick-with-heic-support-on-ubuntu-18-04-digitalocean-fe2d09dcef1

sudo sed -Ei 's/^# deb-src /deb-src /' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install build-essential autoconf libtool git-core
sudo apt-get build-dep imagemagick libmagickcore-dev libde265 libheif
cd /usr/src/ 
sudo git clone https://github.com/strukturag/libde265.git  
sudo git clone https://github.com/strukturag/libheif.git 
cd libde265/ 
sudo ./autogen.sh 
sudo ./configure 
sudo make  
sudo make install 
cd /usr/src/libheif/ 
sudo ./autogen.sh 
sudo ./configure 
sudo make  
sudo make install 
cd /usr/src/ 
sudo wget https://www.imagemagick.org/download/ImageMagick.tar.gz 
sudo tar xf ImageMagick.tar.gz 
cd ImageMagick-7*
sudo ./configure --with-heic=yes 
sudo make  
sudo make install  
sudo ldconfig
sudo apt install php-imagick
cd /usr/src/ 
wget http://pecl.php.net/get/imagick-3.4.4.tgz
tar -xvzf imagick-3.4.4.tgz
cd imagick-3.4.4/
apt install php7.2-dev
phpize
./configure
make
make install
sudo phpenmod imagick

A restart of apache2 should finish the job. Check with the phpinfo() call…

sudo systemctl restart apache2
php -r 'phpinfo();' | grep HEIC
You should see:
ImageMagick supported formats => 3FR, 3G2, 3GP, A, AAI, AI, ART, ARW, AVI, AVS, B, BGR, BGRA, BGRO, BIE, BMP, BMP2, BMP3, BRF, C, CAL, CALS, CANVAS, CAPTION, CIN, CIP, CLIP, CMYK, CMYKA, CR2, CRW, CUBE, CUR, CUT, DATA, DCM, DCR, DCRAW, DCX, DDS, DFONT, DJVU, DNG, DPX, DXT1, DXT5, EPDF, EPI, EPS, EPS2, EPS3, EPSF, EPSI, EPT, EPT2, EPT3, ERF, EXR, FAX, FILE, FITS, FLV, FRACTAL, FTP, FTS, G, G3, G4, GIF, GIF87, GRADIENT, GRAY, GRAYA, GROUP4, HALD, HDR, HEIC,...

The Social Dilemma

I thought I understood the general concepts and algorithms that companies like Google, Facebook, Twitter, etc. use, but I was astounded by how much they impact us as a society. The documentary “The Social Dilemma” on Netflix is filled with conversations with many of the original architects of these systems, showing how monetization through ad targeting is driving behavior modification of billions of people worldwide.

The Social Dilemma also explains how our younger populations are being affected, correlating the dramatic increase in conditions like anxiety with platforms’ drive to keep someone always engaged for monetary gain.

I had already started getting off a number of social platforms, Facebook and Instagram being the latest, but now I’m really concerned about how being online is affecting my daughters.

What’s the answer? I don’t know, but I can tell you that I am more willing than ever to pay for services that are not ad-driven. I already have a Pi-hole for ad blocking and use cleanbrowsing.org for DNS filtering. But what do you do when you use Gmail? Or an iPhone or a Google Android phone? Is it flip-phone time again? I don’t know what to think, really. And that’s a good thing.

What I can say is I would highly recommend the documentary.

https://www.thesocialdilemma.com/

https://www.humanetech.com/take-control

ZFS glory and snapshot hell

ZFS on TrueOS: Why We Love OpenZFS - TrueOS

This page documents my trials with ZFS snapshots for backup purposes. There’s a problem that shows up when an incremental snapshot send is performed after the receive side has changed in some way. I’ll provide complete details soon. Good news: my ZFS retention script looks to be running well. I’ll document that soon as well. Here’s a teaser…
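
In the meantime, here’s the shape of the problem as a minimal sketch (pool, dataset, and host names are made up): an incremental send fails if the receiving dataset changed after the last common snapshot, unless the destination is rolled back first or the receive is forced with -F, which rolls it back to the most recent received snapshot before applying the increment:

# zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive -F backup/data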

So long, Facebook, and Thanks for all the Fish …

Good Morning!
After not being active on Facebook for almost a year, I made the move to completely delete my account. While it was surprisingly tough initially, it was a great decision. I realized all the ads and shaped content were not worth the family and friend connections I was actually seeking. My Instagram account will probably be deleted soon as well; I’m getting ads and such on that platform too, which isn’t surprising since Instagram is also owned by Facebook.


I’m available through more conventional, old-school means, and I am slowly updating my website so I can communicate on my own terms without pushing content on anyone. I do have a means to share photos with the family, so if you’re interested, let me know and I’ll send you a link to my own personal cloud share.

Thanks and I hope to hear from you sometime!

Password-less ssh in 2 Glorious Steps…

Local system – let’s call it alpha.
Remote system we don’t want to have to enter passwords for – let’s call it foxtrot.

Prep: Harden your existing ssh keys, since RSA 1024 is weak. The commands below move the old keys aside and create a new 4096-bit RSA key – though ed25519 is actually preferred, so you can skip the RSA creation if you like.

me@alpha$ mv ~/.ssh/id_rsa ~/.ssh/id_rsa_legacy
me@alpha$ mv ~/.ssh/id_rsa.pub ~/.ssh/id_rsa_legacy.pub

Step 1: Generate new keys:

me@alpha$ ssh-keygen -t rsa -b 4096 -o -a 100   #RSA version

me@alpha$ ssh-keygen -o -a 100 -t ed25519 #Preferred ed25519 version

Step 2: Copy the Ed25519  keys to the remote system called foxtrot:

me@alpha$ ssh-copy-id -i ~/.ssh/id_ed25519.pub me@foxtrot

If ssh-copy-id is not available (powershell, etc.) manually copy the public key to the other host:

me@alpha$ cat ~/.ssh/id_ed25519.pub | ssh me@foxtrot "cat >> ~/.ssh/authorized_keys"


DONE!
 Now verify you can actually ssh without a password:

me@alpha$ ssh me@foxtrot
me@foxtrot:~$ hostname
foxtrot
me@foxtrot:~$

You can also check your ~/.ssh/authorized_keys file for duplicate or old entries, especially if you used weak RSA 1024-or-smaller keys in the past.

Additional reference: Manually copy the keys (this will ask for the remote user’s password):

me@alpha$ scp ~/.ssh/id_ed25519.pub me@foxtrot:~
me@foxtrot$ cat ~/id_ed25519.pub >> ~/.ssh/authorized_keys

Fancy way of doing the same thing (tee takes stdin and appends it to file):

me@alpha$ cat ~/.ssh/id_ed25519.pub | ssh me@foxtrot tee -a ~/.ssh/authorized_keys

Wait… what about PowerShell?

ssh-copy-id isn’t available so you can use the following:

$publicKey = Get-Content $env:USERPROFILE\.ssh\id_ed25519.pub

ssh me@foxtrot "mkdir -p ~/.ssh; echo '$publicKey' >> ~/.ssh/authorized_keys; chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys"

Thanks to the following sites for easily explaining this process:
https://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id/
https://blog.g3rt.nl/upgrade-your-ssh-keys.html
https://www.ionos.com/digitalguide/server/security/using-ssh-keys-for-your-network-connection/

 

HomeLab Build

Since I had an old Windows laptop as a Plex and file server for years, I thought it would be good to try something new. After researching options I decided to try FreeNAS. It has ZFS, and I’m an old Sun guy – why not? Well… after a few weeks I decided to abandon FreeNAS and roll my own using a ThinkCentre M93p Tiny. I’ll try to post some notes on how the build goes.

Raspberry Pi backup using fsarchiver and other tricks

So I ran into a few issues using the dd image backup I referenced in my prior post, Raspberry Pi 3 SDCard backup:

  1. The image is very large even though the data was not. For example, on a 32GB SD card I was getting a 12GB file for only 3GB of data, so that was a bummer.
  2. When it comes time to recover, I have to expand the gz image file to a full 32GB and then image it onto another SD device. There are tricks around this, I’m sure, but still.
  3. Since dd reads 100% of the SD card (/dev/mmcblk0), even with compression it took a LONG time to create the image – 20 minutes or so. Since I’m backing up a live system, this was a real issue.

I did manage to figure out how to create a partial image when the partition sizes are smaller than the actual device. This seemed to work, but it was still storing 6.6GB of data, over double what I actually had:

Trimmed SD Image…

root@webpi:/mnt/usb# blockdev --getsize64 /dev/mmcblk0p1 /dev/mmcblk0p2
66060288
8929745920
root@webpi:/mnt/usb# echo `blockdev --getsize64 /dev/mmcblk0p1` `blockdev --getsize64 /dev/mmcblk0p2` + p | dc
8995806208
root@webpi:/mnt/usb# dd if=/dev/mmcblk0 conv=sync,noerror iflag=count_bytes count=8995806208 \
| gzip > /mnt/usb/webpi.trimmed.img.gz
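
For the record, restoring that trimmed image is just the reverse (the target device below is an example – triple-check it before writing):

# gunzip -c /mnt/usb/webpi.trimmed.img.gz | dd of=/dev/sdX bs=4M conv=fsync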

Still not good enough… Also, I might have to tweak the count to make sure I’m not missing the last little piece of the last partition, since there is partition data in front of the partitions.

So…

To remedy these issues, I researched other ways to back up and came to the conclusion that fsarchiver was a decent fit: simple to use, and it only backs up data. The downside is I’d have to use another Linux system to reconstruct the SD card; I can’t just blast an image onto an SD card and call it good.

Here are the steps. Since my fsarchiver doesn’t support vfat, I had to make a dd image of the 66MB vfat boot partition. Not a big deal. The newer fsarchiver supports vfat; I just didn’t want to install the packages needed to do a full compile of the latest.

Benefits: Much faster – it takes 5 minutes total. Much smaller data footprint – 3GB of data is stored in a 2.2GB image!
Downside: Not one image – you need to do the recovery on another Linux system with an SD card loaded. Since I have a Pi set up for VPN and such, that’s not a problem for me.

Disclaimer – I’m only posting this stuff to help me remember what I did, and possibly to help others who understand how not to shoot themselves in the foot. Please be very careful in trying any of this. Depending on your situation, it may not apply.

Raspberry Pi Backup using fsarchiver

  1. # Quiesce any major services that might write…
    service apache2 stop
    service mysql   stop
    service cron    stop
  2. # Save the Partition Table for good keeping…
    sfdisk -d /dev/mmcblk0 > /mnt/usb/webpi.backup.sfdisk-d_dev_mmcblk0.dump
  3. # Save the vfat boot partition
    dd if=/dev/mmcblk0p1 conv=sync,noerror | gzip > /mnt/usb/webpi.backup.dd_dev_mmcblk0p1.img.gz
  4. # Save the main OS image efficiently…
    fsarchiver savefs -A -j4 -o /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa /dev/mmcblk0p2
  5. # Restart the services…
    service cron    start
    service mysql   start
    service apache2 start

Raspberry Pi Restore using fsarchiver

  1. # Put a new SD card in a card reader and plug it
    # into a Raspberry Pi - it showed up as /dev/sdb
  2. # Restore the partition table
    sfdisk /dev/sdb < /mnt/usb/webpi.backup.sfdisk-d_dev_mmcblk0.dump
  3. # Restore the vfat partition
    gunzip -c /mnt/usb/webpi.backup.dd_dev_mmcblk0p1.img.gz | dd of=/dev/sdb1 conv=sync,noerror
  4. # Run fsarchiver archinfo to verify you have a fsarchiver file and 
    # determine which partition you want to recover if you did multiple partitions
    fsarchiver archinfo /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa 
    ====================== archive information ======================
    Archive type:                   filesystems
    Filesystems count:             1
    Archive id:                     5937792d
    Archive file format:           FsArCh_002
    Archive created with:           0.6.19
    Archive creation date:         2017-06-12_07-51-00
    Archive label:                 <none>
    Minimum fsarchiver version:     0.6.4.0
    Compression level:             3 (gzip level 6)
    Encryption algorithm:           none
    ===================== filesystem information ====================
    Filesystem id in archive:       0
    Filesystem format:             ext4
    Filesystem label:
    Filesystem uuid:               8a9074c8-46fe-4807-8dc9-8ab1cb959010
    Original device:               /dev/mmcblk0p2
    Original filesystem size:       7.84 GB (8423399424 bytes)
    Space used in filesystem:       3.37 GB (3613343744 bytes)
  5. # Run the restfs option for fsarchiver
    fsarchiver restfs /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa id=0,dest=/dev/sdb2
    filesys.c#127,devcmp(): Warning: node for device [/dev/root] does not exist in /dev/
    Statistics for filesystem 0
    * files successfully processed:....regfiles=59379, directories=6999, symlinks=5774, hardlinks=331, specials=80
    * files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
  6. Run sync for warm fuzzies...
    #sync;sync;sync

 

Worked like a CHAMP!