NIXOS MIGRATION MANIFESTO

TARGET: Pop!_OS -> NixOS Bare Metal Host
WORKLOAD: 30+ Docker Compose Containers (Plex, Immich, ML, AI Agents)
OBJECTIVE: Zero Configuration Drift, 100% Reproducibility, Persistent File Permissions.

1. The Philosophical Imperative (The "Why")

The current homelab relies on Pop!_OS, a traditional imperative Linux distribution. Over time, imperative systems suffer from "Configuration Drift"—a slow degradation of system state caused by undocumented `apt` installs, orphaned dependencies, manual `fstab` edits, and untracked changes in `/etc/`. This makes disaster recovery a high-friction, multi-hour manual process relying on human memory.

NixOS forces a Declarative Paradigm. The entire operating system (firewall rules, kernel parameters, hardware drivers, user UIDs/GIDs, and disk mounts) is defined declaratively in a single `configuration.nix` file. If the boot drive catastrophically fails, redeploying NixOS from this configuration file recreates a functionally identical host environment in minutes; only mutable application data on the arrays needs restoring separately. For a heavy Docker workload, NixOS acts as an effectively immutable container host.

2. Phase 1: Pre-Migration State Extraction (The "When")

WHEN: Execute this immediately prior to flashing the NixOS installer and wiping the primary Pop!_OS boot drive. Do not skip this step; failure to map UIDs will result in catastrophic permission denied errors across the Plex and Docker data arrays.

We must extract the exact block device UUIDs and the legacy User/Group IDs. Docker containers (specifically `linuxserver.io` images) rely heavily on numerical PUID and PGID values mapping directly to host file ownership.

A. Extracting Block Devices

NixOS requires explicit UUID mapping to mount your external storage arrays (`/mnt/PlexStorage`, `/mnt/PlexDrive`). Execute the following on Pop!_OS:

# Extract comprehensive block data. Write the backups to the external array,
# NOT to /root on the boot drive you are about to wipe. The sh -c wrapper
# ensures the redirect itself runs with root privileges.
sudo sh -c 'lsblk -f -o NAME,FSTYPE,UUID,MOUNTPOINT > /mnt/PlexStorage/pre_nixos_lsblk.txt'

# Backup the fstab to capture specific mount options (e.g., ZFS/Btrfs flags)
sudo cp /etc/fstab /mnt/PlexStorage/pre_nixos_fstab.txt
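The UUIDs captured above translate directly into declarative mounts in `configuration.nix`. A sketch with a placeholder UUID and illustrative options; substitute the real values recorded from the lsblk and fstab backups:

```nix
# Placeholder UUID: replace with the real value from pre_nixos_lsblk.txt.
fileSystems."/mnt/PlexStorage" = {
  device = "/dev/disk/by-uuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX";
  fsType = "ext4";        # match the FSTYPE column from the lsblk output
  options = [ "nofail" ]; # illustrative: keep the host bootable if the array is offline
};
```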

B. Extracting Identity Mappings

The compose files indicate `PUID=1000` (palav) and `PGID=110` (used for hardware rendering/Plex). We must record exactly what group 110 maps to on Pop!_OS (usually `render` or `video`) so we can force NixOS to create that exact numerical group.

# Confirm the user ID (expect uid=1000 for palav); tee writes the record
# to the surviving array with root privileges
id palav | sudo tee /mnt/PlexStorage/pre_nixos_uid.txt

# Determine the exact group mapped to GID 110
getent group 110 | sudo tee /mnt/PlexStorage/pre_nixos_gid110.txt
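It also pays to spot-check the numeric ownership actually present on the data arrays before the wipe. The `/mnt/PlexStorage` path is the one from this document; the runnable lines below merely demonstrate the `stat` format string on a temporary file:

```shell
# On Pop!_OS, sample the array's numeric UID:GID pairs (expect 1000:110 etc.):
#   sudo find /mnt/PlexStorage -maxdepth 2 -exec stat -c '%u:%g %n' {} + | sort -u
# The same format string, demonstrated on a file owned by the current user:
f=$(mktemp)
stat -c '%u:%g %n' "$f"
rm -f "$f"
```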

3. Phase 2: The NixOS Architecture (The "How")

After booting the NixOS live installer and generating the initial config, the following modules must be injected into `/etc/nixos/configuration.nix`.

A. UID/GID Parity Enforcement

By forcing NixOS to use these exact numerical IDs, the legacy files on the preserved (never reformatted) `/mnt/PlexStorage` drives remain fully accessible to the new OS and its containers.

# Ensure the 'render' group uses the exact GID 110 required by Plex
users.groups.render = {
  gid = 110;
};

users.users.palav = {
  isNormalUser = true;
  uid = 1000;
  description = "Primary Admin";
  extraGroups = [ "wheel" "docker" "render" "video" ];
};
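Once the configuration is applied, the parity can be checked mechanically. A sketch; the `check_gid` helper is hypothetical, demonstrated here resolving GID 0 to `root` so it runs anywhere, and on the rebuilt host you would assert that 110 resolves to `render`:

```shell
# Resolve a numeric GID to its group name and compare with the expected value.
check_gid() {
  gid=$1; expected=$2
  actual=$(getent group "$gid" | cut -d: -f1)
  if [ "$actual" = "$expected" ]; then
    echo "GID $gid -> $actual (OK)"
  else
    echo "GID $gid -> ${actual:-<unmapped>} (MISMATCH, expected $expected)"
  fi
}

# Demo against GID 0 (root on any Linux system).
# After nixos-rebuild, run: check_gid 110 render
check_gid 0 root
```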

B. Hardware Acceleration & Passthrough

The `Plex` container requires `/dev/dri` access, and the `Doraemon` container may require NVIDIA toolkit access. NixOS manages proprietary drivers declaratively.

# Enable OpenGL and Intel Media Drivers
hardware.graphics = {
  enable = true;
  extraPackages = with pkgs; [
    intel-media-driver
    intel-vaapi-driver
  ];
};

# If NVIDIA is present in the machine:
services.xserver.videoDrivers = ["nvidia"];
hardware.nvidia = {
  modesetting.enable = true;
  open = false; # Proprietary kernel module; the open module supports only Turing (RTX 20-series) and newer GPUs
};

# Expose the NVIDIA Container Toolkit to the Docker Engine
hardware.nvidia-container-toolkit.enable = true;

C. Container Engine Configuration

Enable the Docker daemon declaratively. The NixOS `oci-containers` module cannot execute local Compose `build:` directives, so we retain the standard Docker daemon plus the Compose CLI.

virtualisation.docker = {
  enable = true;
  # Configure log rotation natively to prevent host disk exhaustion
  daemon.settings = {
    "log-driver" = "json-file";
    "log-opts" = {
      "max-size" = "10m";
      "max-file" = "3";
    };
  };
};
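Log rotation caps per-container logs, but image and build-cache accumulation across 30+ stacks also needs a safety valve. NixOS exposes Docker pruning as a declarative timer; a sketch, where the weekly schedule and the `--all` flag are choices, not requirements:

```nix
# Periodic prune of unused Docker data via a NixOS-managed systemd timer.
virtualisation.docker.autoPrune = {
  enable = true;
  dates = "weekly";
  flags = [ "--all" ]; # also remove unreferenced (not just dangling) images
};
```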

environment.systemPackages = with pkgs; [
  docker-compose # Ensure compose CLI is available
  git
  fzf
  tmux
];

D. Automated Systemd Boot Orchestrator

Because we are avoiding NixOS's native `oci-containers` to maintain the custom `build:` directives in your compose files, we must write a Systemd service to bootstrap the homelab automatically on boot.

This script intelligently traverses the `~/docker` and `~/Projects` directories, finds valid `docker-compose.yml` files, and initiates them.

systemd.services.homelab-autostart = {
  description = "Bootstrapper for Homelab Docker Compose Stacks";
  wantedBy = [ "multi-user.target" ];
  after = [ "docker.service" "network-online.target" ];
  wants = [ "network-online.target" ]; # after= alone does not pull the target into the boot
  requires = [ "docker.service" ];
  
  path = with pkgs; [ docker docker-compose bash findutils ];
  
  script = ''
    # No shebang needed: NixOS wraps this body in a shell script automatically
    TARGET_DIRS=("/home/palav/docker" "/home/palav/Projects")
    
    for dir in "''${TARGET_DIRS[@]}"; do
      if [ -d "$dir" ]; then
        # Find all compose files exactly one directory deep
        find "$dir" -mindepth 2 -maxdepth 2 -type f \( -name "docker-compose.yml" -o -name "docker-compose.yaml" \) -print0 | while IFS= read -r -d '' compose_file; do
          stack_dir=$(dirname "$compose_file")
          echo "[Homelab-Init] Initiating stack in $stack_dir"
          cd "$stack_dir" || continue
          
          # Pull prebuilt images and start detached. Use the docker-compose
          # binary guaranteed on the unit's path; one failing stack must not
          # abort the entire bring-up (the script runs under set -e).
          docker-compose pull --ignore-buildable || true
          docker-compose up -d --remove-orphans || echo "[Homelab-Init] WARNING: stack in $stack_dir failed to start"
        done
      fi
    done
  '';
  
  serviceConfig = {
    Type = "oneshot";
    RemainAfterExit = true;
    User = "palav";
    Group = "docker";
    TimeoutStartSec = "20min"; # Accommodate massive image pulls on first boot
  };
};
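The traversal logic can be exercised outside systemd with a throwaway directory tree; the `plex` and `immich` names below are stand-ins for real stack directories under `~/docker`:

```shell
# Build a temp tree, then count compose files exactly one directory deep,
# mirroring the find invocation used by the service.
root=$(mktemp -d)
mkdir -p "$root/plex" "$root/immich/nested"
touch "$root/plex/docker-compose.yml" "$root/immich/docker-compose.yaml"
touch "$root/immich/nested/docker-compose.yml"   # depth 3: must be skipped
find "$root" -mindepth 2 -maxdepth 2 -type f \
  \( -name "docker-compose.yml" -o -name "docker-compose.yaml" \) | wc -l   # -> 2
rm -rf "$root"
```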

4. Final Execution Strategy

To finalize the migration, the administering AI must execute:

  1. `sudo nixos-rebuild switch` to compile the new operating system state.
  2. Verify hardware rendering passthrough: `ls -l /dev/dri` (ensure `/dev/dri/renderD128` is group-owned by `render`, GID 110).
  3. Verify the Docker daemon is active: `systemctl status docker`.
  4. Monitor the automated bring-up sequence: `journalctl -u homelab-autostart -f`.

Once complete, the host OS is effectively immortalized in code.