Stay frosty like Tony
A home for my system configurations and home lab using Nix Flakes.
Be warned, I’m still learning and experimenting.
Features:
Todo:
`net.inet.tcp.tso` for VM safety/perf
Features:
Todo:
Features:
Todo:
Features:
Todo:
`KubeletPSI` feature
`/etc/kubernetes/pki`
`buildEnv` over `devShell`
See also:
Pre-requisites:
systemctl reboot --firmware-setup
~ Future Ariel
HpSetup.txt
OR follow along the rest.
As root, use `nix-shell` to obtain Git and Helix.
Grab `/etc/nixos/hardware-configuration.nix`.
Place it in the machine's `hardware-configuration.nix` in the flake repo.
(This step may no longer be necessary.)
Press F10 to enter BIOS, boot update.
Enter 1 and it should update.
curl https://github.com/arichtman.keys >> ~/.ssh/authorized_keys
Edit `/etc/network/interfaces` and switch the virtual bridge network configuration from `manual` to `dhcp`.
Determine whether the system boots via UEFI or GRUB with `efibootmgr -v`.
If GRUB, sed -i -r -e 's/(GRUB_CMDLINE_LINUX_DEFAULT=")(.*)"/\1\2 intel_iommu=on"/' /etc/default/grub
echo 'vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd' >> /etc/modules
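Not in the original notes, but on a Debian-based Proxmox install the new modules typically only load at boot after the initramfs is regenerated; something like:

```shell
# Assumed follow-up step: rebuild the initramfs so the vfio modules load at boot
update-initramfs -u -k all
```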
apt install prometheus-node-exporter
apt install avahi-daemon
apt install grub-efi-amd64
systemctl disable openipmi
If I check the `/etc/grub.d/000_proxmox`-whatever file, it says `update-grub` isn't the way and to use `proxmox-boot-tool refresh` instead.
It also looks like there’s a specific proxmox grub config file under /etc/default/grub.d/proxmox-ve.cfg
.
I don’t expect it hurts much to have iommu on as a machine default, and we’re not booting anything else…
Might tidy up the sed config command though.
Looking at the systemd-boot path, we could probably do both without harm.
That one also uses the official Proxmox command.
References:
We did run `mkfs -t ext4`, but it didn't allow us to use the disk in the GUI.
So using the GUI we wiped the disk and initialized it with GPT.
For the USB rust bucket we found the device name with `fdisk -l`.
Then we ran `mkfs -t ext4 /dev/sdb`, followed by a `mount /dev/sdb /media/backup`.
Never mind, same dance with the GUI, followed by heading to Node > Disks > Directory and creating one.
Use blkid
to pull details and populate a line in /etc/fstab
for auto remount of backup disk.
Ref
`/etc/fstab`:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=C61A-7940 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=b35130d3-6351-4010-87dd-6f2dac34cfba /mnt/pve/Backup ext4 defaults,nofail,x-systemd.device-timeout=5 0 2
I used this to shift OPNsense to 999 and any templates to >=1000.
lvs -a
lvrename prod vm-100-disk-0 vm-999-disk-0
/etc/pve/nodes/proxmox/qemu-server
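Presumably the matching step, not captured above, is to move the VM config to the new ID and update its disk references; a sketch using the node name from the path above:

```shell
# Hypothetical: rename the VM config and point it at the renamed LVs
mv /etc/pve/nodes/proxmox/qemu-server/100.conf /etc/pve/nodes/proxmox/qemu-server/999.conf
sed -i 's/vm-100-disk-/vm-999-disk-/g' /etc/pve/nodes/proxmox/qemu-server/999.conf
```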
nix-shell -p cloud-utils
growpart /dev/sda 1
resize2fs /dev/sda1
/var/lib/vz/template/iso
`opkg update`, or maybe System > Firmware.
dig +trace @4.4.4.4 bing.com
Follow one of the 6000 tutorials AKA yes, I forgot to document it.
Follow tutorial AKA forgot to document it.
See also wg0.conf
in this repo.
`/usr/local/etc/rc.syshook.d/update/99-alacritty-terminal`:
#!/bin/sh
# Configures terminal for Alacritty
curl -sSL https://raw.githubusercontent.com/alacritty/alacritty/master/extra/alacritty.info -o alacritty.info
infotocap alacritty.info >> /usr/share/misc/termcap
cap_mkdb /usr/share/misc/termcap
rm alacritty.info
Nginx configuration on OPNsense requires modification for Kanidm OAuth to work!
Find the `server` block for the Kanidm HTTP server containing `include $UUID_post/*.conf`, and note the directory name.
Head to that directory under `/usr/local/etc/nginx` and add a `.conf` file with the following:
proxy_pass_header X-KANIDM-OPID;
Create `/usr/local/tftp` and download `netboot.xyz.kpxe` into it.
I also downloaded netboot.xyz.efi
for good measure.
Enable TFTP and set the listening IP to 0.0.0.0.
This defaulted to 127.0.0.1, which may have worked but I didn't test.
Install `os-wol` to wake on LAN.
Add all physical machines to the list of known machines; you can use the ISC DHCP leases to find all the MACs in one place.
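For example, pulling the MACs straight out of the leases file might look like this (the leases path is an assumption; the GUI leases page works too):

```shell
# Assumed OPNsense ISC DHCP leases path; lists unique MAC addresses seen by dhcpd
grep -Eo 'hardware ethernet [0-9a-fA-F:]+' /var/dhcpd/var/db/dhcpd.leases | sort -u
```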
Notes:
I will revisit the resources supplied after running the box for a bit.
CPU seems fine, spiky with what I think are Python runtime startups from the control layer. RAM looks consistently under about 1 GB, so I'll trim that back from the recommended minimum of 2 GB. We're doing pretty well on space too, but I'm less short on that.
References:
999.
Enable start on boot and disable the unique feature.
Edit `/etc/nixos/configuration.nix` and set:
security.sudo.wheelNeedsPassword = false
/dev/sda
mkdir ~/.ssh && curl https://github.com/arichtman.keys -o ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
sudo nixos-rebuild switch --upgrade --upgrade-all
sudo nixos-rebuild list-generations
sudo rm /nix/var/nix/profiles/system-#-link
sudo nix-collect-garbage --delete-old
Arguably this mingles with substratum, as PKI/trust/TLS is required or very desirable for VPN/HTTPS etc. SPIFFE/SPIRE will address this somewhat.
xkcdpass --delimiter - --numwords 4 > root-ca.pass
step certificate create "ariel@richtman.au" ./root-ca.pem ./root-ca-key.pem --profile root-ca --password-file ./root-ca.pass
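Optionally, sanity-check the result (not in the original notes):

```shell
# Inspect the freshly minted root certificate
step certificate inspect --short ./root-ca.pem
```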
`fetchUrl` call
garage layout assign --zone garage.services.richtman.au --capacity 128GB $(garage node id 2>/dev/null)
garage layout apply --version 1
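A quick check of the staged and applied layout (assumed workflow, not from the original notes):

```shell
# Show the current cluster layout and any staged changes
garage layout show
```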
export KANIDM_URL=https://id.richtman.au
export GRAFANA_FQDN=grafana.services.richtman.au
kanidm system oauth2 create grafana $GRAFANA_FQDN https://$GRAFANA_FQDN
kanidm system oauth2 set-landing-url grafana "https://${GRAFANA_FQDN}/login/generic_oauth"
kanidm group create 'grafana_superadmins'
kanidm group create 'grafana_admins'
kanidm group create 'grafana_editors'
kanidm group create 'grafana_users'
kanidm system oauth2 update-scope-map grafana grafana_users email openid profile groups
kanidm system oauth2 enable-pkce grafana
kanidm system oauth2 update-claim-map-join 'grafana' 'grafana_role' array
kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_superadmins' 'GrafanaAdmin'
kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_admins' 'Admin'
kanidm system oauth2 update-claim-map 'grafana' 'grafana_role' 'grafana_editors' 'Editor'
# Note: I'm not sure I need *all* of these
kanidm group add-members grafana_superadmins arichtman
kanidm group add-members grafana_admins arichtman
kanidm group add-members grafana_editors arichtman
kanidm group add-members grafana_users arichtman
kanidm system oauth2 get grafana
kanidm system oauth2 show-basic-secret grafana
Grafana side: Official docs examples
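For completeness, a sketch of what the Grafana side might look like with generic OAuth; the endpoint URLs and role mapping here are assumptions, so confirm them against Kanidm's OIDC discovery document for the grafana client and the official docs above:

```ini
# Sketch only - verify every URL against the Kanidm OIDC discovery document
[auth.generic_oauth]
enabled = true
name = Kanidm
client_id = grafana
client_secret = <output of show-basic-secret>
scopes = openid email profile groups
auth_url = https://id.richtman.au/ui/oauth2
token_url = https://id.richtman.au/oauth2/token
api_url = https://id.richtman.au/oauth2/openid/grafana/userinfo
use_pkce = true
role_attribute_path = contains(grafana_role[*], 'GrafanaAdmin') && 'GrafanaAdmin' || contains(grafana_role[*], 'Admin') && 'Admin' || contains(grafana_role[*], 'Editor') && 'Editor' || 'Viewer'
```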
# Create a client certificate with admin
step certificate create cluster-admin cluster-admin.pem cluster-admin-key.pem \
--ca ca.pem --ca-key ca-key.pem --insecure --no-password --template granular-dn-leaf.tpl --set-file dn-defaults.json --not-after 8760h \
--set organization=system:masters
# Construct the kubeconfig file
# Here we're embedding certificates to avoid breaking stuff if we move or remove cert files
kubectl config set-cluster home --server https://fat-controller.systems.richtman.au:6443 --certificate-authority ca.pem --embed-certs=true
kubectl config set-credentials home-admin --client-certificate cluster-admin.pem --client-key cluster-admin-key.pem --embed-certs=true
kubectl config set-context --user home-admin --cluster home home-admin
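To confirm the new context works, something like:

```shell
# Switch to the freshly created context and do a read-only call
kubectl config use-context home-admin
kubectl get nodes
```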
openssl genpkey -out klient-key.pem -algorithm ed25519
openssl req -new -config klient.csr.conf -key klient-key.pem -out klient.csr
export KLIENT_CSR=$(base64 klient.csr | tr -d "\n")
envsubst < klient-csr.yaml | kubectl apply -f -
kubectl certificate approve <csr-name>
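For reference, `klient-csr.yaml` presumably looks roughly like the following sketch (the metadata name is illustrative; the real file is in this repo):

```yaml
# Hypothetical shape of klient-csr.yaml consumed by the envsubst call above
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: klient
spec:
  request: ${KLIENT_CSR}
  signerName: kubernetes.io/kube-apiserver-client
  usages:
    - client auth
```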
For security reasons, it’s not possible for nodes to self-select roles.
We can label our nodes using label.sh
.
Hmmm, deleting the nodes (reasonably) removes labels. And since they can't self-identify, we have to relabel every time. I expect taints would work the same way, so we couldn't use a daemonset or spread topology with labeling privileges since it wouldn't know what to label the node. Unless… we deploy it with a configMap? That's kinda lame.

I suppose all the nodes that need this are dynamic, ergo ephemeral and workers, so we could make something like that. Heck, a static pod would work fine for this and be simple as. But then it'd be a pod, which is a continuous workload, which we really don't need. A job would suit better, but then it's like, why even run this on the nodes themselves? Have the node self-delete (it'll self-register again anyway), and have the admin box worry about admin tasks like labelling.

I wonder if there's any better way, security-wise, to have nodes be trusted with certain labels. Already they need apiServer-trusted client certificates; it'd be cool if the metadata on those could determine labels.
There is a way to tell the Kubelet to register with labels but it’s limited to a specific group. I doubt the Kubelet has an option to open that up and since we’re getting denied even starting the binary it’s probably not settable on the APIserver.
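So labelling stays an admin-box job for now: node-role labels are among those a kubelet can't self-assign, but an admin credential can still apply them, for example (node name illustrative):

```shell
# Illustrative admin-side labelling after a node re-registers
kubectl label node "$NODE_NAME" node-role.kubernetes.io/worker= --overwrite
```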
kubectl get csr --no-headers -o jsonpath='{.items[*].metadata.name}' | xargs -r kubectl certificate approve
Checking builds manually: nix build .#nixosConfigurations.fat-controller.config.system.build.toplevel
Minimal install is ~3.2 GB.
Lab node with master is about 3.2 GB also, so we will want more headroom.
Add to nomicon
Using Tasker:
Profile: AutoPrivateDNS
State: Wifi Connected [ SSID:sugar_monster_house MAC:* IP:* Active:Any ]
Enter: Anon
A1: Custom Setting [ Type:Global Name:private_dns_mode Value:off Use Root:Off Read Setting To: ]
Exit: Anon
A1: Custom Setting [ Type:Global Name:private_dns_mode Value:hostname Use Root:Off Read Setting To: ]
Set the secure setting `accessibility_display_daltonizer_enabled` to 0 or 1 for the color toggle.
nix shell nixpkgs#android-tools -c adb shell pm grant net.dinglisch.android.taskerm android.permission.WRITE_SECURE_SETTINGS
References:
Trust chain system install:
sudo security add-trusted-cert -r trustRoot -k /Library/Keychains/System.keychain -d ~/Downloads/root-ca.pem
OPNsense/OpenSSL's ciphers are too new; to install the client certificate you may need to create the PKCS#12 bundle with the legacy option.
openssl pkcs12 -export -legacy -out Certificate.p12 -in certificate.pem -inkey key.pem
softwareupdate -ia
softwareupdate --install-rosetta --agree-to-license
I didn’t explicitly install it but it’s on there somehow now.
There was some mention that it auto-installs if you try running x86_64 binaries.
sudo mv /etc/nix/nix.conf /etc/nix/.nix-darwin.bkp.nix.conf
nix-build https://github.com/LnL7/nix-darwin/archive/master.tar.gz -A installer
./result/bin/darwin-installer
edit default configuration.nix? n
# Accept the option to manage nix-darwin using nix-channel or else it bombs
manage using channels? y
add to bashrc? y
add to zshrc? y
create /run? y
# a nix-channel call will now fail
Bootstrapping:
nix build github:arichtman/nix#darwinConfigurations.macbook-pro-work.system
./result/sw/bin/darwin-rebuild switch --flake .#macbook-pro-work
./result/sw/bin/darwin-rebuild switch --flake github:arichtman/nix
To do: look into Nix VMs on Mac
Some very WIP notes about the desktop.
`bluetoothctl trust $MAC` to try and start off autoconnect.
`sudo visudo` and swap the commented lines for wheel to enable NOPASSWD.
Add `trusted-users = @wheel` to `/etc/nix/nix.custom.conf` (DetSys thing not to use `/etc/nix/nix.conf`).
Note: might be able to specify this at install time…
Enable composefs transient root, then install DetSys Nix (for SELinux support).
`/etc/ostree/prepare-root.conf`:
[composefs]
enabled = yes
[root]
transient = true
Then `sudo rpm-ostree initramfs-etc --reboot --track=/etc/ostree/prepare-root.conf`.
Reference
`nix develop` to bootstrap.
home-manager switch --flake . -b backup
sudo curl https://www.richtman.au/root-ca.pem -o /etc/pki/ca-trust/source/anchors/root-ca.pem
sudo update-ca-trust
sudo usermod --shell $(which zsh) arichtman
Note: not sure how this is going; obvs that path isn't in /etc/shells, but I can't see any bash-default-shell in rpm-ostree.
Reboot and see if it applies on login.
sudo rpm-ostree install -y --idempotent zsh alacritty
Fix failure to wake from sleep.
/usr/lib/systemd/system/service.d/50-keep-warm.conf
:
# Disable freezing of user sessions to work around kernel bugs.
# See https://bugzilla.redhat.com/show_bug.cgi?id=2321268
[Service]
Environment=SYSTEMD_SLEEP_FREEZE_USER_SESSIONS=0
tracker-miner-fs-3.service
nano-default-editor
rpm-ostree override remove
systemd-remount-fs.service
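If I'm reading these fragments back correctly, they translate to something like the following, which is an unverified guess:

```shell
# Assumed intent of the notes above - verify before running
sudo rpm-ostree override remove nano-default-editor
sudo systemctl mask tracker-miner-fs-3.service
sudo systemctl mask systemd-remount-fs.service
```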
nixpkgs
and NixOS-WSL
source Nix files