Moving Debian to a New Computer

These are the steps I took to migrate a Debian installation from an old computer to a new one. I took the old SSD out, connected it to the new computer via an external enclosure, and booted the new machine from a live USB.

The next step is to copy over the entire disk from the old SSD to the new one. Because we will copy everything, even the partitions’ UUIDs will remain the same and no extra steps should be necessary apart from adjusting some partition sizes. Be very careful with the output and input devices. In my case the old SSD is connected as the external drive /dev/sdb and the new one is /dev/nvme0n1.
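Before running dd, double-check the device names; listing the disks with their sizes and models helps avoid copying in the wrong direction:

$ lsblk -o NAME,SIZE,MODEL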

$ sudo dd if=/dev/sdb of=/dev/nvme0n1 bs=4K status=progress

Refresh the partition table:

$ sudo partprobe

Grow /dev/nvme0n1p3 to fill the entire disk using gparted.
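If you prefer the command line over gparted, parted should be able to grow the partition as well (a sketch, assuming partition 3 is the last one on the disk):

$ sudo parted /dev/nvme0n1 resizepart 3 100%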

$ sudo cryptsetup --token-only open /dev/nvme0n1p3 new-root
$ sudo cryptsetup resize --token-only new-root 

(You can omit --token-only if you don't use a YubiKey to unlock the drive.)

Mount the btrfs root file system and resize it:

$ sudo mount -t btrfs /dev/mapper/new-root /mnt
$ sudo btrfs filesystem resize max /mnt	

Now you are ready to reboot into the new system.

Reencrypt the LUKS partition

Moving to a new SSD is also a good opportunity to rotate the master key of the LUKS encrypted root partition. This can be done while the disk is online and mounted, and takes some time.

The reencryption implementation doesn't properly support FIDO2 keys for unlocking, so we will have to delete those key slots and re-enroll the keys afterwards. Select a key slot with a passphrase and pass it using the --key-slot parameter. You can check which key slot is in use using cryptsetup luksDump.
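For example, to list the key slots and enrolled tokens on our device:

$ sudo cryptsetup luksDump /dev/nvme0n1p3

Then start the reencryption: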

$ sudo cryptsetup reencrypt /dev/nvme0n1p3 --key-slot 1

Once done, re-enroll any FIDO2 keys you have by running the following command for each key:

$ sudo systemd-cryptenroll /dev/nvme0n1p3 --fido2-device=auto  --fido2-with-client-pin=yes

Enabling Secure Boot

Initially, I had problems with Secure Boot refusing to boot the new installation. They were resolved by reinstalling shim-signed and grub-efi-amd64-signed. Additionally, I had to enable “Allow Microsoft 3rd Party UEFI CA” in the Secure Boot settings of the UEFI:

[Image: Lenovo T14 Gen 4 Secure Boot settings]

Creating a WireGuard profile for Cloudflare Warp

Connecting to Cloudflare Warp directly via wg can have advantages in flexibility or in specific scenarios. For example, the Warp client, warp-cli, refuses to establish a connection if it can't overwrite /etc/resolv.conf. By connecting directly using WireGuard, you keep control over all that.

The first step is to install warp-cli and register using warp-cli register. This will create the WireGuard private key used for the connection and register it with Cloudflare. The private key can be found in /var/lib/cloudflare-warp/reg.json. The endpoint data and Cloudflare's public key should be constant. Alternative endpoints are listed in /var/lib/cloudflare-warp/conf.json.

An easy way to read the json configuration files is using jq:

$ sudo jq . /var/lib/cloudflare-warp/conf.json

Adjust the following template accordingly, and put it in /etc/wireguard/warp.conf:

[Interface]
PrivateKey = XXXXXXXXXXXX  
Address = 172.16.0.2/32
Address = 2606:4700:110:892f:607d:85a6:5e07:70cf/128
[Peer]
PublicKey = bmXOC+F1FxEMF9dyiK2H5/1SUtzH0JuVo51h2wPfgyo=
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = engage.cloudflareclient.com:2408

You can start the tunnel using

$ sudo wg-quick up warp

Alternatively, you can import it into NetworkManager and easily start it from the GNOME Quick Settings.

$ sudo nmcli connection import type wireguard file /etc/wireguard/warp.conf

You can easily check that the tunnel works by visiting https://www.cloudflare.com/cdn-cgi/trace/ and looking for the line that says warp=on.
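The same check can be done from the command line with curl:

$ curl -s https://www.cloudflare.com/cdn-cgi/trace/ | grep warp
warp=on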

Sometimes, IPv4 won’t work while IPv6 works. Restarting the VPN several times can resolve the issue.

while ! ping -w1 -c1 1.1.1.1; do wg-quick down warp; wg-quick up warp; done

or using nmcli:

while ! ping -w1 -c1 1.1.1.1; do nmcli connection down warp; nmcli connection up warp; done

Disabling the Cloudflare client

The Cloudflare client might interfere with the WireGuard profile. It's best to disable it:

$ sudo systemctl disable --now  warp-svc.service
$ systemctl --user disable --now warp-taskbar.service

Protecting OpenSSL private keys using PKCS#8

When generating private keys with OpenSSL, by default they are unprotected by a passphrase. For example:

$ openssl genrsa
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCY+q1kOPM4RF5T
...
Rjet4T2TrJmFzIL1dsgJACU=
-----END PRIVATE KEY-----

generates a plaintext private key. By using the -aes256 parameter, the generated key is protected according to PKCS#8:

$ openssl genrsa -aes256
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFLTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQITQpYPEgrGN0CAggA
...
XIdi3hSPE8NoPWnROVGMnGbFOMm36g6064nZzQBD4r+4
-----END ENCRYPTED PRIVATE KEY-----

However, is it actually secure?

The default protection used by genrsa is PBKDF2 with 2048 (0x800) iterations of HMAC-SHA256. This can be verified by examining the PKCS#8 key, which is ASN.1 encoded.

$ openssl asn1parse -in key.priv 
    0:d=0  hl=4 l=1325 cons: SEQUENCE          
    4:d=1  hl=2 l=  87 cons: SEQUENCE          
    6:d=2  hl=2 l=   9 prim: OBJECT            :PBES2
   17:d=2  hl=2 l=  74 cons: SEQUENCE          
   19:d=3  hl=2 l=  41 cons: SEQUENCE          
   21:d=4  hl=2 l=   9 prim: OBJECT            :PBKDF2
   32:d=4  hl=2 l=  28 cons: SEQUENCE          
   34:d=5  hl=2 l=   8 prim: OCTET STRING      [HEX DUMP]:EAD8B97C8EAD1414
   44:d=5  hl=2 l=   2 prim: INTEGER           :0800
   48:d=5  hl=2 l=  12 cons: SEQUENCE          
   50:d=6  hl=2 l=   8 prim: OBJECT            :hmacWithSHA256
   60:d=6  hl=2 l=   0 prim: NULL              
   62:d=3  hl=2 l=  29 cons: SEQUENCE          
   64:d=4  hl=2 l=   9 prim: OBJECT            :aes-256-cbc
   75:d=4  hl=2 l=  16 prim: OCTET STRING      [HEX DUMP]:290DB8F1786A9C8667829A4A394A8FB7
   93:d=1  hl=4 l=1232 prim: OCTET STRING      [HEX DUMP]:D828C11F170D22B82801A567A9136AA293D630692B88C1F07987ED256C29DC45C4625709A0D36039B22E5A8BCA2B400C590C2560CED3629D1F8C5D77AD...CC5F8027A3A5ED533A00D626E7DDC4ABC85547

A meager 2048 SHA-256 iterations provide very little added protection on top of the passphrase's intrinsic entropy.

Better protection can be achieved by using scrypt to protect the PKCS#8 encoded key. You can upgrade an existing key’s protection to scrypt:

$ openssl pkcs8 -in key.priv -topk8 -scrypt -out key-scrypt.priv

The command will prompt for the key's passphrase and re-encrypt it using a key derived from the same passphrase using scrypt.

The default OpenSSL scrypt parameters, N=0x4000=2^14, r=8, p=1, are simply weak. They are what scrypt's author recommended for achieving 100ms per derivation on his 2002-era laptop. While better than the PBKDF2 protection offered by OpenSSL, you might want stronger protection.

Choosing better scrypt parameters

Before we dive into the parameters, let’s understand the scrypt algorithm briefly. Scrypt takes three input parameters: password, salt, and cost parameters. The password and salt are user-provided inputs, whereas the cost parameters are fixed for a given implementation.

The cost parameters determine the computational cost of the algorithm. Higher cost parameters result in increased computational requirements, making it harder to brute-force the derived key. The cost parameters are as follows:

  • N: This is the CPU/memory cost parameter. It determines the number of iterations the algorithm performs. The value of N must be a power of two.
  • r: This is the block size parameter. It determines the size of the blocks of memory used during the algorithm’s execution.
  • p: This is the parallelization parameter. It doesn't affect the memory consumption of the algorithm. However, despite its name, OpenSSL doesn't perform any parallelization, instead opting to run the algorithm serially.

The original article recommends r=8 and p=1 as a good ratio between memory and CPU costs. This allows us to scale N almost arbitrarily, increasing both CPU and memory costs. However, OpenSSL, by default, limits its memory usage.
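As a rule of thumb, scrypt needs about 128 * r * N bytes of memory. A quick shell check for the default parameters:

$ echo $((128 * 8 * 0x4000))
16777216

That's 16 MiB. Doubling N to 0x8000 doubles the memory requirement, which already exceeds OpenSSL's default limit: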

$ openssl pkcs8 -in key.priv -topk8 -scrypt -scrypt_N 0x8000 -out key-scrypt2.priv
Enter pass phrase for key.priv:
Error setting PBE algorithm
40E778DD277F0000:error:030000AC:digital envelope routines:scrypt_alg:memory limit exceeded:../providers/implementations/kdfs/scrypt.c:482:
40E778DD277F0000:error:068000E3:asn1 encoding routines:PKCS5_pbe2_set_scrypt:invalid scrypt parameters:../crypto/asn1/p5_scrypt.c:59:

This limitation severely restricts our choice of parameters. The only option left for increasing the security is the CPU cost parameter (p). We can set it (almost) arbitrarily large, gaining added security. For example:

$ openssl pkcs8 -in MOK.priv -topk8 -scrypt -scrypt_p 32 -passin pass:1234 -passout pass:1234 -out key-scrypt3.priv

Split DNS using systemd-resolved

Many corporate environments have internal DNS servers that are required to resolve internal resources. However, you might prefer a different DNS server for external resources, for example 1.1.1.1 or 8.8.8.8. This allows you to use more secure DNS features like DNS over TLS (DoT). The solution is to set up systemd-resolved as your DNS resolver, and configure it for split DNS resolving.

Starting with systemd 251, Debian ships systemd-resolved as a separate package. If it isn’t installed, go ahead and install it.

$ sudo apt install systemd-resolved
$ sudo systemctl enable --now systemd-resolved.service

Create the following configuration file under /etc/systemd/resolved.conf.d/99-split.conf:

[Resolve]
DNS=1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com 2606:4700:4700::1111#cloudflare-dns.com 2606:4700:4700::1001#cloudflare-dns.com

Domains=~.
DNSOverTLS=opportunistic

Domains=~. gives priority to the global DNS servers (1.1.1.1 in our case) over the per-link DNS configuration pushed through DHCP (like internal DNS servers).

DNSOverTLS=opportunistic defaults to DNS over TLS but allows fallback to regular DNS. This is useful when corporate DNS doesn’t support DNS over TLS and you still want to resolve corporate internal domains.

Restart systemd-resolved to reload the configuration:

$ sudo systemctl restart systemd-resolved

The final step is to redirect programs relying on /etc/resolv.conf (possibly through the glibc API) to the systemd-resolved resolver. The recommended way, according to the systemd-resolved man page, is to symlink it to /run/systemd/resolve/stub-resolv.conf.

$ sudo ln -rsf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
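To verify the setup, inspect the resolver status; the Global section should list the Cloudflare servers and the DNSOverTLS mode:

$ resolvectl status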

F5 VPN

F5 VPN doesn’t play well with the above configuration. First, F5 VPN tries to overwrite the DNS configuration in /etc/resolv.conf by removing the existing file and replacing it with its own (pushing the corporate DNS server configuration through it). The solution is to prevent F5 VPN from deleting /etc/resolv.conf by setting it to immutable. However, we cannot chattr +i a symlink, so we have to resort to copying the configuration statically and then protecting it.

$ sudo cp /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
$ sudo chattr +i /etc/resolv.conf

Finally, because F5 VPN can no longer update the DNS configuration, we have to configure the corporate DNS servers and search domains manually.

$ sudo resolvectl dns tun0 192.168.100.20 192.168.100.22
$ sudo resolvectl domain tun0 ~example.corp ~example.local

Update: See Automating DNS Configurations for F5 VPN Tunnel using Systemd-resolved and NetworkManager-dispatcher for a script that automates the configuration.

Signing kernel modules for Secure Boot

Some time ago, I needed to use the v4l2loopback module. It can be installed via:

$ sudo apt install v4l2loopback-dkms

Normally, after installing a module, you can just modprobe it, and it will load. However, due to Secure Boot, it will fail.

$ sudo modprobe v4l2loopback 
modprobe: ERROR: could not insert 'v4l2loopback': Operation not permitted

The problem is that the v4l2loopback module isn’t signed. For example, compare the output of:

$ /usr/sbin/modinfo -F signer v4l2loopback

which is empty, versus

$ /usr/sbin/modinfo -F signer xor
Debian Secure Boot CA

The solution would be to sign the v4l2loopback module ourselves.

Creating a key

The update-secureboot-policy script available in Ubuntu’s shim-signed package can generate Machine Owner Keys (MOK) by itself. However, the version currently available in Debian Unstable doesn’t have the key-generation functionality. We can either fetch the Ubuntu version or generate the keys ourselves.

$ wget https://git.launchpad.net/ubuntu/+source/shim-signed/plain/update-secureboot-policy
$ chmod +x ./update-secureboot-policy
$ sudo ./update-secureboot-policy --new-key

Or we can generate the keys ourselves:

$ sudo mkdir -p /var/lib/shim-signed/mok
$ cd /var/lib/shim-signed/mok/
$ sudo openssl genrsa -aes256 -out MOK.priv
$ sudo openssl req \
        -subj "/CN=`hostname -s | cut -b1-31` Secure Boot Module Signature key" \
        -new -x509 -nodes -days 36500 -outform DER \
        -key MOK.priv \
        -out MOK.der

Write down the passphrase for your private key. You will need it whenever you want to sign drivers.

Now we enroll the newly created key:

$ sudo mokutil --import MOK.der

You will be prompted for a password. This password will be required after reboot in order to complete the key enrollment; you will not need it afterwards.

After reboot, check that your key was indeed enrolled:

$ mokutil --list-enrolled

Signing the module

We need to put the passphrase for the private key in the KBUILD_SIGN_PIN environment variable:

$ read -s KBUILD_SIGN_PIN
$ export KBUILD_SIGN_PIN

Now we can do the actual signing:

$ cd /usr/lib/modules/$(uname -r)/updates/dkms
$ sudo --preserve-env=KBUILD_SIGN_PIN /usr/lib/linux-kbuild-$(uname -r | cut -d. -f1-2)/scripts/sign-file sha256 /var/lib/shim-signed/mok/MOK{.priv,.der} v4l2loopback.ko

You will need to repeat this step for every new kernel that you install.
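You can verify that the signing worked by querying the module’s signer again; instead of an empty string, it should now print the CN you chose when generating the MOK:

$ /usr/sbin/modinfo -F signer v4l2loopback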

Setting up WireGuard on Debian

WireGuard is a modern VPN solution which is much easier to configure than OpenVPN and the like. In this tutorial, we assume a simple setup where we want to route all the clients’ network traffic through the VPN, exiting through the server.

Server configuration

$ sudo apt install wireguard
$ PRIVATE_KEY=$(wg genkey)

Now we create the configuration file for our tunnel (wg0).

$ cat <<EOF | sudo tee /etc/wireguard/wg0.conf
[Interface]
PrivateKey = $PRIVATE_KEY
Address = 10.8.0.1/24, fd0d:d325:45c0::1/64
ListenPort = 51820
SaveConfig = true

PostUp = iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PostUp = ip6tables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
PreDown = ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
EOF

If you have a firewall, you’ll need to open up UDP port 51820 (or whatever is configured as the ListenPort).

Enable IPv4 and IPv6 forwarding so we can route all the client traffic through the server (and reload the sysctl configuration):

$ echo net.ipv4.ip_forward=1 | sudo tee /etc/sysctl.d/40-wireguard.conf
$ echo net.ipv6.conf.all.forwarding=1 | sudo tee -a /etc/sysctl.d/40-wireguard.conf
$ sudo sysctl --system 

Enable and start the WireGuard service:

$ sudo systemctl enable --now wg-quick@wg0.service

Finally, take note of the server public key (it will be a short base64 encoded string):

$ echo $PRIVATE_KEY | wg pubkey

You’ll need it in the next step.

Peer configuration

$ sudo apt install wireguard
$ PRIVATE_KEY=$(wg genkey)
$ PUBLIC_KEY=$(echo $PRIVATE_KEY | wg pubkey)
$ SERVER_PUBLIC_KEY=6TDw+U2WFhkaKUy/xXrCRtuZvB2m2SFN7URZA5AkGis=
$ SERVER_IP=8.8.8.8

Replace the value of SERVER_PUBLIC_KEY with the public key of your server, and SERVER_IP with the correct IP address of your server.

Create /etc/wireguard/wg0.conf:

$ cat <<EOF | sudo tee /etc/wireguard/wg0.conf
[Interface]
PrivateKey = $PRIVATE_KEY
Address = 10.8.0.2/24, fd0d:d325:45c0::2/64

[Peer]
PublicKey = $SERVER_PUBLIC_KEY
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = $SERVER_IP:51820
EOF

Setting AllowedIPs = 0.0.0.0/0, ::/0 will route all traffic through the VPN connection. If you don’t want to do that, edit the configuration file, and set AllowedIPs = 10.8.0.0/24, fd0d:d325:45c0::0/64.

We need to make the server aware of the peer. The following command should be executed on the server.

$ sudo wg set wg0 peer $PUBLIC_KEY allowed-ips 10.8.0.2,fd0d:d325:45c0::2

where PUBLIC_KEY is the value of the client’s public key (stored in the $PUBLIC_KEY environment variable).

Thanks to SaveConfig = true, /etc/wireguard/wg0.conf will be updated accordingly, making the added peer configuration persistent. The configuration update will take place the next time the WireGuard interface goes down.
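To verify that the peer was added (and, once the client connects, to see the latest handshake), inspect the runtime configuration on the server:

$ sudo wg show wg0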

You can also import the profile into NetworkManager and easily enable/disable the new interface via GNOME:

$ sudo nmcli connection import type wireguard file /etc/wireguard/wg0.conf

Android Peer

It is also useful to have WireGuard on the phone. WireGuard supports both iOS and Android, and the setup should be similar in both cases. Start by installing WireGuard from the Play Store. The next step is to generate the required configuration. It can be done directly on the phone, or by creating a configuration file on your computer and transferring it, which I find simpler.

$ PRIVATE_KEY=$(wg genkey)
$ PUBLIC_KEY=$(echo $PRIVATE_KEY | wg pubkey)
$ SERVER_PUBLIC_KEY=6TDw+U2WFhkaKUy/xXrCRtuZvB2m2SFN7URZA5AkGis=
$ SERVER_IP=8.8.8.8
$ cat <<EOF > wg0.conf
[Interface]
PrivateKey = $PRIVATE_KEY
Address = 10.8.0.3/24, fd0d:d325:45c0::3/64

[Peer]
PublicKey = $SERVER_PUBLIC_KEY
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = $SERVER_IP:51820
EOF

Again, make the server aware of the client by running the following command on the server:

$ sudo wg set wg0 peer $PUBLIC_KEY allowed-ips 10.8.0.3,fd0d:d325:45c0::3

Transfer the configuration file wg0.conf to the phone and load it using the WireGuard app.
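Alternatively, instead of transferring the file, you can render the configuration as a QR code directly in the terminal and scan it with the WireGuard app (a sketch, assuming the qrencode package is installed):

$ qrencode -t ansiutf8 < wg0.conf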

Update 2023-11-30: Added IPv6 configuration.

Creating FIDO2 SSH keys using ssh-keygen

$ ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk
  • -t ecdsa-sk specifies the key type to generate. Alternatively, you can generate Ed25519 keys using -t ed25519-sk.
  • -f ~/.ssh/id_ecdsa_sk specifies the output path for the newly generated key.

You can provide a passphrase for your key if you would like to. Unlike normal SSH keys, the private key file is not that sensitive, as it is useless without the physical security key itself.

ed25519-sk vs ecdsa-sk

Newer YubiKeys (firmware >= 5.2.3) and some other FIDO2 keys support Ed25519 keys. Ed25519 has some advantages over the common ECDSA keys:

  • Ed25519 is based on Curve25519, versus NIST P-256 which is used for ecdsa-sk. Curve25519 is generally regarded as faster and safer than NIST P-256; see SafeCurves. Furthermore, the underlying signature algorithm (Schnorr vs. DSA) is slightly faster for Ed25519.
  • EdDSA in general, and Ed25519 in particular, uses deterministic nonces, versus the random nonces used by ECDSA. This means ECDSA is prone to catastrophic entropy failures (see the famous fail0verflow PS3 hack as an example). Assuming your key has access to high-entropy randomness, that shouldn’t be a problem. However, that assumption might turn out false, as in the case of the reduced initial randomness in the YubiKey FIPS Series.

The bottom line: if you have access to a key that supports ed25519-sk, it’s preferable to use it. If you don’t, that’s not something to worry about too much. There are probably weaker points in your threat model anyway.

If your FIDO2 key doesn’t support ed25519-sk you will get the following error when trying to generate a key:

Key enrollment failed: requested feature not supported

Moving keys to a new computer

If you want to use the keys on a new computer, you will have to copy over the private key file that you generated. That will normally be ~/.ssh/id_ecdsa_sk or ~/.ssh/id_ed25519_sk, depending on the type of key you generated.

Alternatively, you can generate resident keys, which are stored entirely on the YubiKey. To generate resident keys, append -O resident to your ssh-keygen command. Example:

$ ssh-keygen -t ecdsa-sk -O resident

To import the keys to a new device, use the -K option:

$ ssh-keygen -K

This will download all the keys (public and private) from the YubiKey to the current directory. There is no need to manually transfer any key files.

YubiKey Series 5 devices can hold up to 25 resident keys.

Unlock LUKS volume with a YubiKey

Update: The dracut configuration has been updated and now udev consistently recognizes the YubiKey in the initramfs.

Unlocking LUKS encrypted drives with a YubiKey has been supported since systemd 248. In Debian, systemd>=250 is required, as the feature has not been enabled in prior versions. This tutorial is geared towards Yubikeys, but it should work with slight modifications with any other FIDO2 token.

YubiKey series 5 and later should support the required hmac-secret extension. You can make sure your YubiKey supports it by querying it with ykman:

$ ykman --diagnose 2>&1 | grep hmac-secret

Backup your LUKS header

In case you mess anything up, you will want a backup of your LUKS header. Remember to save the backup to some external storage, so you can actually access it if anything goes sideways.

# cryptsetup luksHeaderBackup /dev/nvme0n1p3 --header-backup-file /media/guyru/E474-2D80/luks_backup.bin

Set FIDO2 PIN

We would like to set a FIDO2 PIN for the YubiKey, so unlocking the encrypted drive requires both the physical YubiKey and the PIN. You can set the PIN using:

$ ykman fido access change-pin

Enroll the Yubikey

Start by verifying that systemd-cryptenroll can see and can use your YubiKey:

$ systemd-cryptenroll --fido2-device=list
PATH         MANUFACTURER PRODUCT
/dev/hidraw0 Yubico       YubiKey FIDO+CCID

Now, enroll the Yubikey, replacing /dev/nvme0n1p3 with the block device of the LUKS encrypted drive.

$ sudo systemd-cryptenroll /dev/nvme0n1p3 --fido2-device=auto  --fido2-with-client-pin=yes
🔐 Please enter current passphrase for disk /dev/nvme0n1p3: (no echo)
Initializing FIDO2 credential on security token.
👆 (Hint: This might require confirmation of user presence on security token.)
🔐 Please enter security token PIN: (no echo)
Generating secret key on FIDO2 security token.
👆 In order to allow secret key generation, please confirm presence on security token.
New FIDO2 token enrolled as key slot 0.

Modify /etc/crypttab

We need to modify /etc/crypttab in order to tell cryptsetup to unlock the device using the YubiKey. Add fido2-device=auto in the options field of the crypttab entry for your device. For example:

nvme0n1p3_crypt UUID=307a6bef-5599-4963-8ce0-d9e999026c1a none luks,discard,fido2-device=auto
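As a sanity check, you can list the key slots systemd-cryptenroll knows about; running it with just the device and no enrollment options should print the enrolled slots, including the FIDO2 one:

$ sudo systemd-cryptenroll /dev/nvme0n1p3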

Switch to dracut

Debian’s default initramfs generator, update-initramfs from initramfs-tools, uses the old cryptsetup scripts for mounting encrypted drives. However, cryptsetup doesn’t recognize the fido2-device option. Running update-initramfs will warn about it, and the generated initramfs won’t be able to unlock the drive using the YubiKey:

$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.15.0-3-amd64
cryptsetup: WARNING: nvme0n1p3_crypt: ignoring unknown option 'fido2-device'

This is unfortunate. The simplest solution is to switch to dracut, a more modern initramfs generator, which, among other things, relies on systemd to activate encrypted volumes. This solves the issue of the unknown fido2-device option.

Before installing dracut, I would highly recommend creating a copy of the existing initramfs in the boot partition in case something goes wrong.
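For example (a minimal sketch; adjust if you boot a different kernel version):

$ sudo cp /boot/initrd.img-$(uname -r){,.bak}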

$ sudo apt install dracut

Dracut includes systemd-cryptsetup by default. systemd-cryptsetup depends on libfido2 for unlocking devices using FIDO2 tokens. At least in Debian, systemd-cryptsetup dynamically loads libfido2.so (as opposed to being dynamically linked against it), which causes dracut not to include libfido2.so in the initramfs. This causes systemd-cryptsetup to issue the following error upon boot:

FIDO2 tokens not supported on this build. 

We fix it by manually adding libfido2.so to the initramfs. Of course, we also need to include libfido2’s dependencies. Dracut has a mechanism for automatically adding dependencies for executables, but it doesn’t work on libraries. As a workaround, instead of adding libfido2 directly, we will add an executable that depends on libfido2, which will pull libfido2 and its dependencies into the initramfs. We will use fido2-token from the fido2-tools package for this trick.

$ sudo apt install fido2-tools
$ cat << EOF | sudo tee /etc/dracut.conf.d/11-fido2.conf
## Spaces in the quotes are critical.
# install_optional_items+=" /usr/lib/x86_64-linux-gnu/libfido2.so.* "

## Ugly workaround because the line above doesn't fetch
## dependencies of libfido2.so
install_items+=" /usr/bin/fido2-token "

# Required for detecting the fido2 key
install_items+=" /usr/lib/udev/rules.d/60-fido-id.rules /usr/lib/udev/fido_id "
EOF

Now, recreate the initramfs images:

$ sudo dracut -f

Last remarks

At this point, we are done. Reboot your machine and it will prompt you for your YubiKey, allowing you to unlock your LUKS encrypted root partition with it. If you don’t have your YubiKey, it will give the following prompt:

Security token not present for unlocking volume root (nvme0n1p3_crypt), please plug it in.

After around 30 seconds, it will time out and display the following message:

Timed out waiting for security device, aborting security device based authentication attempt.

Afterwards, it will allow you to unlock the partition using a password (or a recovery key).

In case you run into any trouble, append rd.break=initqueue to the kernel command line, and dracut will enter a shell before attempting to mount the partitions. You can manually unlock the drive using the following command:

# /usr/lib/systemd/systemd-cryptsetup attach root /dev/nvme0n1p3

Exit the emergency shell, and the system will continue its normal boot.

Replacing PulseAudio with PipeWire 0.3.30

Replacing PulseAudio with PipeWire became much simpler with PipeWire 0.3.30 and requires less configuration. I’m going to go through the updated routine. You can read the original post for more explanations.

The new version is still only available in experimental as of today.

$ sudo apt install -t experimental pipewire-pulse pipewire-audio-client-libraries libspa-0.2-bluetooth

The pipewire-pulse package takes care of most of the configuration that was previously needed, like touching with-pulseaudio or manually creating the systemd service files.

$ systemctl --user daemon-reload
$ systemctl --user disable pulseaudio.socket pulseaudio.service
$ systemctl --user stop pulseaudio.socket pulseaudio.service
$ systemctl --user mask pulseaudio.service pulseaudio.socket
$ systemctl --user enable --now pipewire pipewire-pulse pipewire-media-session

Don’t remove the PulseAudio packages yet. While no longer used, some packages still depend specifically on PulseAudio and might break. See the original post for more details.

Enable mSBC and SBC XQ

One of the main advantages of PipeWire is proper support for better-sounding Bluetooth audio profiles, specifically mSBC and SBC XQ. Copy /usr/share/pipewire/media-session.d/bluez-monitor.conf to ~/.config/pipewire/media-session.d/bluez-monitor.conf (or to /etc/pipewire/media-session.d/bluez-monitor.conf) and add the following lines in the properties section:

bluez5.msbc-support   = true
bluez5.sbc-xq-support = true
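
Restart PipeWire for the new settings to take effect:

$ systemctl --user restart pipewire pipewire-pulse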

Replacing PulseAudio with PipeWire

PipeWire is a multimedia server, best known for its video support in Wayland. It also provides an audio server which can replace PulseAudio. The appeal, for me at least, of switching from PulseAudio to PipeWire stems from PipeWire’s better support for Bluetooth audio, especially modern A2DP codecs such as AptX, AptX HD and LDAC.

PipeWire 0.3.20 introduced native mSBC support. This profile supports the mSBC codec, versus the CVSD codec supported by the older HSP/HFP profiles. The difference is significant: CVSD only supports narrow band speech (NBS, 8kHz), compared with mSBC’s support for wide band speech (WBS, 16kHz). That is the difference between 90s-era call quality and modern call quality.

Update: PipeWire-0.3.30 made the replacement process simpler. See my updated post.

Installing PipeWire 0.3.23

As of writing this post, Debian Unstable only has PipeWire 0.3.19. We are going to install PipeWire from the experimental repo so we get PipeWire 0.3.23 with support for mSBC.

We start by enabling the experimental repo:

$ sudo apt-add-repository "deb http://deb.debian.org/debian experimental main"
$ sudo apt update

Install PipeWire from experimental:

$ sudo apt install -t experimental pipewire-audio-client-libraries libspa-0.2-bluetooth

(pipewire-audio-client-libraries will pull pipewire itself as a dependency)

Substituting PipeWire for PulseAudio

These instructions are based on the ones from the Debian Wiki, Arch Wiki and Gentoo Wiki. Create the file:

$ sudo touch /etc/pipewire/media-session.d/with-pulseaudio

It will instruct PipeWire to handle Bluetooth audio devices.

Copy the pipewire-pulse systemd service:

$ sudo cp /usr/share/doc/pipewire/examples/systemd/user/pipewire-pulse.{service,socket} /etc/systemd/user

Disable the PulseAudio services and enable the PipeWire ones:

$ systemctl --user disable pulseaudio.socket pulseaudio.service
$ systemctl --user stop pulseaudio.socket pulseaudio.service
$ systemctl --user enable pipewire pipewire-pulse
$ systemctl --user start pipewire pipewire-pulse

If everything worked well, pactl info should report Server Name: PulseAudio (on PipeWire 0.3.23):

$ pactl info | grep "Server Name"
Server Name: PulseAudio (on PipeWire 0.3.23)

If not, you might need to restart (PulseAudio tends to be rather persistent). In case PulseAudio still doesn’t play nicely, you should mask it:

$ systemctl --user mask pulseaudio.service pulseaudio.socket
$ systemctl --user stop pulseaudio.service pulseaudio.socket

Removing PulseAudio completely is not a good move at this point in time. Some packages depend on it, although they could work with PipeWire just as well. For example, when I removed PulseAudio, libcanberra-pulse got removed as well, which caused system notification sounds to break. Alternatively, you could try to replace the PulseAudio package with a dummy using equivs, but that seems like more effort than keeping the package.

Enabling mSBC and SBC XQ

Edit /etc/pipewire/media-session.d/bluez-monitor.conf and uncomment the following lines:

bluez5.msbc-support   = true
bluez5.sbc-xq-support = true

This will enable both mSBC and SBC XQ.

You can test that your headset is connected via mSBC using pw-cli info:

$ pw-cli info all | grep bluez
info: unsupported type PipeWire:Interface:Profiler
info: unsupported type PipeWire:Interface:Metadata
info: unsupported type PipeWire:Interface:Metadata
*		device.api = "bluez5"
*		device.name = "bluez_card.94_DB_56_AC_36_52"
*		api.bluez5.path = "/org/bluez/hci0/dev_94_DB_56_AC_36_52"
*		api.bluez5.address = "94:DB:56:AC:36:52"
*		api.bluez5.device = ""
*		api.bluez5.class = "0x240404"
*		api.bluez5.transport = ""
*		api.bluez5.profile = "headset-head-unit"
*		api.bluez5.codec = "mSBC"
*		api.bluez5.address = "94:DB:56:AC:36:52"
*		node.name = "bluez_input.94_DB_56_AC_36_52.headset-head-unit"
*		factory.name = "api.bluez5.sco.source"
*		device.api = "bluez5"
*		api.bluez5.transport = ""
*		api.bluez5.profile = "headset-head-unit"
*		api.bluez5.codec = "mSBC"
*		api.bluez5.address = "94:DB:56:AC:36:52"
*		node.name = "bluez_output.94_DB_56_AC_36_52.headset-head-unit"
*		factory.name = "api.bluez5.sco.sink"
*		device.api = "bluez5"

In case mSBC is not supported, you’ll see api.bluez5.codec = "CVSD" (and you’ll probably hear the difference).

Errors

Problem: Connecting to bluetooth headset fails, and the following error appears in journalctl:

bluetoothd[41893]: src/service.c:btd_service_connect() a2dp-sink profile connect failed for 94:DB:56:AC:36:52: Protocol not available

Solution: You’re missing the libspa-0.2-bluetooth package. Install it and restart PipeWire:

$ sudo apt install -t experimental libspa-0.2-bluetooth
$ systemctl --user restart pipewire pipewire-pulse

Problem: ALSA programs fail with the following error:

ALSA lib pcm_dmix.c:1075:(snd_pcm_dmix_open) unable to open slave
aplay: main:830: audio open error: Device or resource busy

Solution: You need to enable the ALSA backend for PipeWire:

$ sudo touch /etc/pipewire/media-session.d/with-alsa
$ systemctl --user restart pipewire pipewire-pulse