
Nix

Stay frosty like Tony

A home for my system configurations using Nix Flakes. Be warned, I’m still learning and experimenting.

Nothing here should be construed as a model of good work!… yet. Y’know, I’m starting to feel pretty good about this.

Features and Todo

Diagram of Earth's layers

Bedrock (Networking)

Substratum (Virtualization and Systems)

Subsoil (Foundational Services)

Topsoil (Kubernetes)

foolish mortals

Organics (Applications and nice-to-haves)

Implementation Notes

Bedrock

Annoyingly complicated networking

Substratum

Pre-requisites:

HP EliteDesk 800 G3 Micro/Mini.

  1. Mash F10/Esc to hit the BIOS (this was a throwback and a pain to do). Or just use systemctl reboot --firmware-setup ~ Future Ariel.
  2. Update the BIOS
  3. Load the settings from HpSetup.txt OR follow along the rest.
  4. Configure the following
    • Ensure legacy boot is enabled.
    • I disabled Secure Boot and the MS certificate, just in case
    • Turn off fast boot (might be optional)
    • Add boot delay 5 seconds (purely QoL)
    • Ensure USB takes priority over local disk
    • I disabled prompt on memory change so if I add RAM later I don’t have to displace the system.
    • I disabled Intel’s SGX or whatnot. Don’t trust it after the RST debacle.
  5. Save and reboot
  6. Hit escape to select boot option of USB (esc maybe not required)
  7. Follow the instructions to install NixOS
    • 23.05 (but higher is fine)
    • User nixos
    • Same password for root
    • Auto login (QoL but consult your threat model)
  8. Use nix-shell to obtain Git and Helix
  9. Clone this flake repo from github
  10. Copy the machine-specific disk config from /etc/nixos/hardware-configuration.nix. Place it in the machine’s hardware-configuration.nix in the flake repo. (This step may no longer be necessary)
  11. nixos-rebuild switch to the flake’s config (see the sketch after this list).
  12. Confirm SSH remote access is working.
  13. Reboot and enter bios.
    • Turn fast boot back on
    • Set boot delay to 0
    • Disable UEFI boot priority. If we need to boot from USB we’ll reenter the BIOS.
  14. Save BIOS changes and one last confirmation that the system boots and is remotable.
  15. Move the machine to its final home.
  16. Remotely retrieve the hardware configuration and commit it to the flake repo.
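Roughly what steps 11 and 16 look like (a sketch only; the hostname elitedesk and the per-machine path in the repo are placeholders, not the real layout):

# Step 11: on the new box, switch onto this flake's config
sudo nixos-rebuild switch --flake .#elitedesk

# Step 16: from the admin box, pull the generated hardware config back into the repo
scp nixos@elitedesk.local:/etc/nixos/hardware-configuration.nix ./machines/elitedesk/hardware-configuration.nix
git add machines/elitedesk/hardware-configuration.nix && git commit -m 'Add elitedesk hardware config'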

Topton N100 (CW-AL-4L-V1.0 N100)

  1. Download BIOS update and place on Ventoy USB.
  2. Mash F10 to enter BIOS, boot update. Enter 1 and it should update.
  3. Reset and wait, it will beep and hang and reboot but eventually it should come good.
  4. Set USB boot precedence above internal drive/s
  5. Boot the Proxmox installer and walk through it. Set a static IP with the same netmask as the router’s DHCP netmask; best I can tell this is required to send traffic back to the origin.
  6. Highly recommended but optional: trust your SSH keys. curl https://github.com/arichtman.keys >> ~/.ssh/authorized_keys
  7. Optionally, add static DHCP lease to the router. If you do this, you can also optionally remove the fixed interface configuration. Edit /etc/network/interfaces and switch the virtual bridge network configuration from manual to dhcp.
  8. Optionally, install trusted certificates. Instructions are on my blog.
  9. Run some of the Proxmox helper scripts. At least the post-install one to fix sources. I also ran the microcode update, CPU scaling governor, and kernel cleanup (since I had been operating for a while).
  10. Enable IOMMU. First, check whether it’s GRUB or systemd-boot with efibootmgr -v. If GRUB, sed -i -r -e 's/(GRUB_CMDLINE_LINUX_DEFAULT=")(.*)"/\1\2 intel_iommu=on"/' /etc/default/grub
  11. printf '%s\n' vfio vfio_iommu_type1 vfio_pci vfio_virqfd >> /etc/modules (one module per line)
  12. Reboot to check config
  13. Set BIOS settings:
    • Boot:
      • Disable beep
      • Enable fast boot
      • Enable network stack
    • Chipset:
      • PCH-IO:
        • Enable Wake on LAN and BT
        • Enable TCO timer
  14. Install Prometheus node exporter, apt install prometheus-node-exporter.
  15. Install Avahi daemon to enable mDNS, apt install avahi-daemon.
  16. Install grub package so actual grub binaries get updates, apt install grub-efi-amd64.
  17. Optionally comment out the @reboot cron job that sets the CPU scaling governor to power save.
  18. Disable IPMI service since we don’t have support, systemctl disable openipmi.

If I check /etc/grub.d/000_ proxmox whatever, it says update-grub isn’t the way and to use proxmox-boot-tool refresh. It also looks like there’s a specific Proxmox GRUB config file under /etc/default/grub.d/proxmox-ve.cfg. I don’t expect it hurts much to have IOMMU on as a machine default, and we’re not booting anything else… Might tidy up the sed config command though; see the sketch below. Looking at the systemd-boot side, we could probably do both without harm. That one’s also using the official Proxmox command.
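A sketch of that tidier approach (untested here; the drop-in file name is mine, the paths are per the Proxmox/Debian docs):

# GRUB installs: append the parameter via a drop-in instead of sed-ing the main file.
# Name it zz-* so it sorts after proxmox-ve.cfg.
echo 'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT intel_iommu=on"' > /etc/default/grub.d/zz-iommu.cfg
# systemd-boot installs: append intel_iommu=on to the single line in /etc/kernel/cmdline instead.
# Either way, regenerate boot config the Proxmox way:
proxmox-boot-tool refresh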


Proxmox Disk Setup

We did run mkfs -t ext4 but it didn’t allow us to use the disk in the GUI. So, using the GUI, we wiped the disk and initialized it with GPT.

For the USB rust bucket we found the device name with fdisk -l. Then we ran mkfs -t ext4 /dev/sdb, followed by mount /dev/sdb /media/backup. Never mind, same dance with the GUI, followed by heading to Node > Disks > Directory and creating one.

Use blkid to pull details and populate a line in /etc/fstab for auto-remount of the backup disk. Something like the sketch below.
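(Sketch; the device name is from above and the UUID is whatever blkid reports.)

blkid /dev/sdb
# Take the UUID from the output and append the mount, matching the fstab below
echo 'UUID=<uuid-from-blkid> /mnt/pve/Backup ext4 defaults,nofail,x-systemd.device-timeout=5 0 2' >> /etc/fstab
mount -a   # sanity check it mounts cleanly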

/etc/fstab:

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=C61A-7940 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=b35130d3-6351-4010-87dd-6f2dac34cfba /mnt/pve/Backup ext4 defaults,nofail,x-systemd.device-timeout=5 0 2

Re-IDing a Proxmox VM

I used this to shift OPNsense to 999 and any templates to >=1000.

  1. Stop VM
  2. Get storage group name lvs -a
  3. Rename disk lvrename prod vm-100-disk-0 vm-999-disk-0
  4. Enter /etc/pve/nodes/proxmox/qemu-server
  5. Edit conf file to use renamed disk.
  6. Move the conf file to the new ID (full sketch below)
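End to end it looks roughly like this (a sketch; VM IDs 100 and 999, the VG name prod, and the node name proxmox are all taken from the steps above, so substitute your own):

qm stop 100
lvs -a
lvrename prod vm-100-disk-0 vm-999-disk-0
cd /etc/pve/nodes/proxmox/qemu-server
sed -i 's/vm-100-disk-0/vm-999-disk-0/' 100.conf
mv 100.conf 999.conf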

Adding watchdog to a proxmox VM

  1. Add watchdog: model=i6300esb,action=reset to the conf file in /etc/pve/qemu-server/ (or let qm do it, sketch below).
  2. Stop and start the VM.
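qm can write the same thing for you; a sketch, assuming VM ID 100:

qm set 100 --watchdog model=i6300esb,action=reset
qm stop 100 && qm start 100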

Virtual node disk resize

nix-shell -p cloud-utils
growpart /dev/sda 1
resize2fs /dev/sda1

OPNsense

VM Setup
  1. Download iso and unpack
  2. Move iso to /var/lib/vz/template/iso
  3. Create the VM with adjustments. I’m trying 2 cores now; utilization was low but we had spikes which I suspect were system stuff:
    • Start at boot
    • SSD emulation, 48GiB
    • 1 socket + 2 cores, NUMA enabled
    • 2048 MiB RAM
  4. Use proxmox under datacenter to configure a backup schedule. The following should keep a rolling 4-weekly history.
    • Sunday 0100
    • Notify only on fail
    • Keep weekly 4
  5. Boot machine and follow installer
  6. Add PCIe ethernet controllers
Base OS Setup
  1. Boot system and root login
  2. Assign WAN and LAN interfaces to ethernet controllers
  3. Check for updates, either opnsense-update from the shell or System > Firmware in the GUI
  4. Add static DHCP leases for any machines using static IPs so Unbound will serve records for them
  5. Install an intermediate cert and its corresponding bundle under System > Trust
  6. Switch to using the TLS certificate under System > Settings > Administration
  7. Set both interfaces to be delete-protected and to use IPv6 SLAAC
  8. Under System > General:
    • set hostname
    • set domain
    • configure DNS servers
    • Disallow DNS override on WAN
  9. Reporting > NetFlow: set capture on
  10. System > settings > cron
    • once daily to update the block lists
    • once weekly after the backup is taken (this ensures we can restore)
DNS Configuration
  1. Configure the Unbound DNS service
    • enable DNSSEC
    • enable DHCP lease registration
    • Disallow system nameservers in DoT and add records with blank domains+port 853
    • Enable blocklist and use OISD Ads only (to be experimented with)
    • Enable data capture
Firewall Configuration
  1. Firewall
    • Add aliases for static boxes, localhost
    • Create a NAT port-forward:
      • LAN interface
      • IPv4+6
      • TCP+UDP
      • Invert
      • Destination LAN net
      • from dns to dns
      • Redirect target Localhost:53
  2. Test
    • DNS redirection:
      • Unbound host override bing.com to something
      • Check this returns the override: dig +trace @4.4.4.4 bing.com
    • Ad blocking https://d3ward.github.io/toolz/adblock.html
OpenVPN

Follow one of the 6000 tutorials AKA yes, I forgot to document it.

WireGuard

Follow tutorial AKA forgot to document it. See also wg0.conf in this repo.
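At minimum, the key-material side looks like this (a sketch; file names are illustrative and the rest follows whichever tutorial):

# Generate a keypair for the peer
wg genkey | tee wg-client.key | wg pubkey > wg-client.pub
# Optional pre-shared key for the tunnel
wg genpsk > wg-client.psk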

Plugins

Notes:

I will revisit the resources supplied after running the box for a bit.

CPU seems fine, spiky with what I think are Python runtime startups from the control layer. RAM looks consistently under about 1 GiB so I’ll trim that back from the recommended minimum of 2 GiB. We’re doing pretty well on space too, but I’m less short on that.


Virtualized NixOS Node Bootstrap

  1. Use the GUI installer to deploy; the CLI one’s a pain. 12 GiB storage minimum, 4 cores/8 GiB RAM (min 4+ GiB). We’ll scale it down later; it bombs completely without plenty of RAM.
  2. Thing’s a pain to bootstrap and the web console is limited. Sudo-edit /etc/nixos/configuration.nix:
    • Enable the openssh service
    • security.sudo.wheelNeedsPassword = false
    • Disable OSprober by removing the line
    • Set disk source to /dev/sda
    • I’m not sure those last 2 make much of a difference once the machine is under flake control
  3. Rebuild
  4. Bounce the machine so it re-leases DHCP under the new hostname
  5. Then pull down some keys to get in; it should have already DHCP’d over the bridge network. mkdir ~/.ssh && curl https://github.com/arichtman.keys -o ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
  6. Upgrade the system sudo nixos-rebuild switch --upgrade --upgrade-all
  7. Reboot to test
  8. Clear history (see the sketch after this list): sudo nixos-rebuild list-generations, sudo rm /nix/var/nix/profiles/system-#-link, sudo nix-collect-garbage --delete-old
  9. Adjust hardware down to 1/2GiB.
  10. Make template
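Step 8 spelled out, as a sketch (the profile path is the standard NixOS system profile; list-generations needs a reasonably recent nixos-rebuild, nix-env works regardless):

sudo nixos-rebuild list-generations
# Drop everything but the current generation, then collect the garbage
sudo nix-env --profile /nix/var/nix/profiles/system --delete-generations old
sudo nix-collect-garbage --delete-old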

Subsoil

Trust chain setup

Arguably this mingles with substratum, as PKI/trust/TLS is required or very desirable for VPN/HTTPS etc.

  1. Create the root CA: xkcdpass --delimiter - --numwords 4 > root-ca.pass then step certificate create "ariel@richtman.au" ./root-ca.pem ./root-ca-key.pem --profile root-ca --password-file ./root-ca.pass
  2. Make node directories ’cause this is going to get messy: <nodes.txt xargs mkdir
  3. Create password files: <nodes.txt xargs -I% sh -c 'xkcdpass --delimiter - --numwords 4 > "./$1/$1-pass.txt"' -- %
  4. Secure them: chmod 400 ./root-ca.pass ./*/*-pass.txt
  5. Create intermediate CAs (verification sketch below): <nodes.txt xargs -I% step certificate create % ./%/%.pem ./%/%-key.pem --profile intermediate-ca --ca ./root-ca.pem --ca-key ./root-ca-key.pem --ca-password-file ./root-ca.pass --password-file ./%/%-pass.txt
  6. Distribute the intermediate certificates and keys
  7. Secure the root CA, it’s a bit hidden but Bitwarden does take attachments.
  8. Publish the root CA, with my current setup this meant uploading it to s3.
  9. Update the sha256 for the root certificate fetchurl call
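Worth a quick check that each intermediate actually chains back to the root before distributing anything. A sketch, reusing the nodes.txt idiom from above:

<nodes.txt xargs -I% openssl verify -CAfile root-ca.pem ./%/%.pem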

Garage setup

garage layout assign --zone garage.services.richtman.au --capacity 128GB $(garage node id 2>/dev/null)
garage layout apply --version 1
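Sanity-checking before and after is just (a sketch; garage status and garage layout show print the cluster and layout state):

garage status
garage layout show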

Topsoil

Cluster access bootstrap
# Create a client certificate with admin
step certificate create cluster-admin cluster-admin.pem cluster-admin-key.pem \
  --ca ca.pem --ca-key ca-key.pem --insecure --no-password --template granular-dn-leaf.tpl --set-file dn-defaults.json --not-after 8760h \
  --set organization=system:masters
# Construct the kubeconfig file
# Here we're embedding certificates to avoid breaking stuff if we move or remove cert files
kubectl config set-cluster home --server https://fat-controller.local:6443 --certificate-authority ca.pem --embed-certs=true
kubectl config set-credentials home-admin --client-certificate cluster-admin.pem --client-key cluster-admin-key.pem --embed-certs=true
kubectl config set-context home-admin --user home-admin --cluster home
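# Finally, select the context and smoke-test it (sketch)
kubectl config use-context home-admin
kubectl get nodes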

Cluster access (normal)

  1. Create private key openssl genpkey -out klient-key.pem -algorithm ed25519
  2. Create CSR openssl req -new -config klient.csr.conf -key klient-key.pem -out klient.csr
  3. export KLIENT_CSR=$(base64 klient.csr | tr -d "\n")
  4. Submit the CSR to the cluster: envsubst < klient-csr.yaml | kubectl apply -f -
  5. Approve the request: kubectl certificate approve <csr-name> (see the sketch below)
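After approval, the signed certificate still has to be fetched and wired into a kubeconfig. A sketch, assuming the CSR object in klient-csr.yaml is named klient:

kubectl certificate approve klient
kubectl get csr klient -o jsonpath='{.status.certificate}' | base64 -d > klient.pem
kubectl config set-credentials klient --client-certificate klient.pem --client-key klient-key.pem --embed-certs=true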

Cluster node roles

For security reasons, it’s not possible for nodes to self-select roles. We can label our nodes using this:

# Fill in the blanks
k label no/fat-controller node-role.kubernetes.io/master=master
k label no/fat-controller kubernetes.richtman.au/ephemeral=false

k label no/mum node-role.kubernetes.io/worker=worker
k label no/mum kubernetes.richtman.au/ephemeral=false

k label no/patient-zero node-role.kubernetes.io/worker=worker
k label no/patient-zero kubernetes.richtman.au/ephemeral=true
k label no/dr-singh node-role.kubernetes.io/worker=worker
k label no/dr-singh kubernetes.richtman.au/ephemeral=true
k label no/smol-bat node-role.kubernetes.io/worker=worker
k label no/smol-bat kubernetes.richtman.au/ephemeral=true

# Now we can clean up shut down nodes
k delete no -l kubernetes.richtman.au/ephemeral=true

hmmm, deleting the nodes (reasonably) removes labels. …and since they can’t self-identify, we have to relabel every time. I expect taints would work the same way, so we couldn’t use a daemonset or spread topology with labeling privileges since it wouldn’t know what to label the node. Unless… we deploy it with a configMap? That’s kinda lame. I suppose all the nodes that need this are dynamic, ergo ephemeral and workers, so we could make something like that. Heck, a static pod would work fine for this and be simple as. But then it’d be a pod, which is a continuous workload, which we really don’t need. A job would suit better, but then it’s like, why even run this on the nodes themselves? Have the node self-delete (it’ll self-register again anyway), and have the admin box worry about admin like labelling. I wonder if there’s any better way security-wise to have nodes be trusted with certain labels. Already they need apiServer-trusted client certificates, it’d be cool if the metadata on those could determine labels.

Addon-manager

Apparently this is deprecated as of years ago but is still shambling along. As much as I’d love to declaratively bootstrap the cluster it will be less headache to have a one-off CD app install and do the rest declaratively that way. Anywho - to make addon manager actually work, you need to drop a .kube/config file in /var/lib/kubernetes.

Removing coredns shenanigans: k delete svc/kube-dns deploy/coredns sa/coredns cm/coredns clusterrole/system:kube-dns clusterrolebinding/system:kube-dns

Node CSRs piling up

kubectl get csr --no-headers -o jsonpath='{.items[*].metadata.name}' | xargs -r kubectl certificate approve

Notes

Checking builds manually: nix build .#nixosConfigurations.fat-controller.config.system.build.toplevel. Minimal install is ~3.2 GiB; lab-node with master node is about 3.2 GiB also, so we’ll want more headroom.

Add to nomicon

Mobile setup

Using tasker

Profile: AutoPrivateDNS
         State: Wifi Connected [ SSID:sugar_monster_house MAC:* IP:* Active:Any ]
     Enter: Anon
         A1: Custom Setting [ Type:Global Name:private_dns_mode Value:opportunistic Use Root:Off Read Setting To: ]
     Exit: Anon
         A1: Custom Setting [ Type:Global Name:private_dns_mode Value:hostname Use Root:Off Read Setting To: ]

nix shell nixpkgs#android-tools -c adb shell pm grant net.dinglisch.android.taskerm android.permission.WRITE_SECURE_SETTINGS


Desktops

Mac

Trust chain system install: sudo security add-trusted-cert -r trustRoot -k /Library/Keychains/System.keychain -d ~/Downloads/root-ca.pem

MBP M2 setup

  1. Update everything: softwareupdate -ia
  2. Optionally install Rosetta: softwareupdate --install-rosetta --agree-to-license. I didn’t explicitly install it but it’s on there somehow now; there was some mention that it auto-installs if you try running x86_64 binaries.
  3. Determinate Systems Nix install: curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
  4. Until this is resolved (https://github.com/LnL7/nix-darwin/issues/149): sudo mv /etc/nix/nix.conf /etc/nix/.nix-darwin.bkp.nix.conf
  5. Nix-Darwin: build and run the installer
nix-build https://github.com/LnL7/nix-darwin/archive/master.tar.gz -A installer
./result/bin/darwin-installer

edit default configuration.nix? n
# Accept the option to manage nix-darwin using nix-channel or else it bombs
manage using channels? y
add to bashrc y
add to zshrc? y
create /run? y
# a nix-channel call will now fail
  6. Bootstrapping
  7. Do the xcode-install method
  8. Build manually once: nix build github:arichtman/nix#darwinConfigurations.macbook-pro-work.system
  9. Switch manually once: ./result/sw/bin/darwin-rebuild switch --flake .#macbook-pro-work
  10. If bootstrapped, build according to the flake: ./result/sw/bin/darwin-rebuild switch --flake github:arichtman/nix

Universal Blue

Some very WIP notes about the desktop.

DNS

Some diagnostic tests for mDNS:

export HOST_NAME=fat-controller.local.
# This is our bedrock of truth. It works consistently and can be easily viewed
avahi-resolve-host-name $HOST_NAME
tcpdump udp port 5353 # Optionally -Qin
# Supposedly a good test according to Arch wiki, has not once worked for me.
getent ahosts $HOST_NAME
# Sometimes worked but timed out on a 3rd imaginary server. Most verbose but leaks mDNS queries.
dig $HOST_NAME
# Sometimes worked but not very helpful output.
host $HOST_NAME

# Convenience aliases
alias rollall='sudo systemctl restart NetworkManager systemd-resolved systemd-networkd; sudo systemctl daemon-reload'

alias dtest4='dig -4 $HOST_NAME'
alias dtest6='dig -6 $HOST_NAME'
alias htest4='host -4 $HOST_NAME'
alias htest6='host -6 $HOST_NAME'
alias etest='getent ahosts $HOST_NAME'

alltest() {
  dtest4
  dtest6
  htest4
  htest6
  etest
}

alias nm=nmcli
alias rc=resolvectl
alias as=authselect

So, turns out this whole resolution chain is a mess, some things use nsswitch, others don’t etc. We want consistent behaviour and caching, so we need the local stub resolver. We want it even more if we’re switching networks and VPNs as it can hold all the logic for changing shit.

Here’s some locations and commands for config. I tried valiantly to enable it at connection level and in nsswitch but ultimately there was always something that disobeyed the rules.

/etc/nsswitch.conf:

This should be managed by authselect. Don’t ask why. Fun fact: apparently the sssd daemon totally doesn’t need to be running for this to work. Why is DNS entwined with an auth config management tool? Because go fuck yourself, that’s why. ~ Poettering, probably.

authselect list
authselect current
authselect show sssd
# Yields some options
authselect select sssd with-mdns4 with-mdns6

/etc/resolv.conf:

This one is managed by NetworkManager. Why is that capitalized? NFI. Go fuck yourself! ~ Probably Poettering, again.

I tried manually managing this one, no dice (to do that, stop NetworkManager, and remove the symlink). Leave it symlinked to /run/systemd/resolve/stub-resolv.conf. That’s the managed file that will always point at the local stub resolver. We can manage the actual settings with nmcli. mDNS is configured per connection, not interface, which I guess makes sense for laptops/WiFi.

nmcli connection show
# I tried this as 2 (resolve+publish) and I think it clashes with the stub resolver
nmcli conn mod enp3s0 connection.mdns 2
nmcli conn mod sugar_monster_house connection.mdns 2
# Yea it breaks v4 resolution somehow
# Not sure about this one... In theory we lose the domains config as well as our Unbound upstream,
#   but resolved should have us covered? domain search might need to happen at the origin call site though.
nmcli conn mod enp3s0 ipv4.ignore-auto-dns no
nmcli conn mod enp3s0 ipv6.ignore-auto-dns no

Oh, the stub resolver doesn’t actually run on localhost:53. It’s 127.0.0.53 (and actually .54 also, according to man 8 systemd-resolved.service). Can ya guess why? Yup. Had enough self-love yet? Keep reading.

/etc/systemd/network/*.network:

You can write files like:

[Network]
DHCP=yes
Domains=local internal

Except when I experimented resolvectl didn’t edit the file and editing the file didn’t show in resolvectl output. So go figure.

I honestly can’t keep track of what this is relative to NetworkManager. There is a service, systemd-networkd. By the way, systemd-resolved used to be controlled by systemd-resolve. It’s now resolvectl. Guess I’m not mad about that one. Now the fact that mDNS is configured per interface and not connection like before? Get fuuuuuucked. Oh and the daemon only listens on IPv4 (at least by default). GFY!

sudo resolvectl mdns enp0s3 yes
sudo resolvectl domain enp0s3 local internal
echo 'DNSStubListenerExtra=[::1]:53' | sudo tee -a /etc/systemd/resolved.conf

/etc/NetworkManager:

Whatever.

What worked in the end? Well, still getting some odd behaviour with host and IPv6 but… No files in /etc/systemd/network. Disable networkd. Resolvectl set +mdns. Symlinked /etc/resolv.conf to the resolved stub file. Configured resolved. Avahi daemon enabled and running with defaults.

sudo systemctl disable --now systemd-networkd
sudo systemctl mask systemd-networkd
sudo systemctl daemon-reload

Final /etc/systemd/resolved.conf:

[Resolve]
DNS=192.168.1.1 2403:580a:e4b1:0:aab8:e0ff:fe00:91ef
Domains=local internal
MulticastDNS=yes
DNSStubListenerExtra=[::1]:53
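After dropping that in, a quick way to confirm it took (a sketch; the hostname is from earlier):

sudo systemctl restart systemd-resolved
resolvectl status
resolvectl query fat-controller.local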


Desktop Todo

Nix References