Short cryptsetup/LUKS tutorial

This short tutorial will guide you through encrypting a drive with cryptsetup and the LUKS scheme.

Before starting, if the device previously held data, it’s best to delete any filesystem signatures that may be left on it. Assuming the drive we operate on is /dev/sda, you can use the following command to remove the signatures:

$ sudo wipefs --all /dev/sda --no-act

Remove the --no-act flag to actually modify the disk.

The next step is to actually format the drive using LUKS. This is done using the cryptsetup utility.

$ sudo cryptsetup luksFormat --type=luks2 /dev/sda

WARNING!
========
This will overwrite data on /dev/sda irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sda: 
Verify passphrase: 

The command will prompt you to enter a passphrase for the encryption and should take a few seconds to complete.

The next step is to add an appropriate entry to crypttab, which will simplify starting the dm-crypt mapping later. Add the following line to /etc/crypttab:

archive_crypt UUID=114d42e5-6aeb-4af0-8758-b4cc79dd1ba0 none luks,discard,noauto

where the UUID is obtained through lsblk /dev/sda -o UUID or a similar command. archive_crypt is the name for the mapped device; it will appear as /dev/mapper/archive_crypt when the device is mapped. The none parameter specifies that no keyfile is used and that the system should prompt for the encryption passphrase instead. noauto means the device should not be opened automatically upon boot, and discard should be used if the underlying device is an SSD.
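
For example, the UUID can also be read directly from the LUKS header (the output shown is the example UUID used above):

$ sudo cryptsetup luksUUID /dev/sda
114d42e5-6aeb-4af0-8758-b4cc79dd1ba0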

You can test that everything works so far by opening and loading the LUKS device:

$ sudo cryptdisks_start archive_crypt
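
cryptdisks_start is Debian-specific. If it is not available on your system, you can open the mapping directly with cryptsetup, bypassing crypttab:

$ sudo cryptsetup open /dev/sda archive_crypt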

While the device is now encrypted, metadata may still leak: an attacker examining the physical drive can discern which blocks are in use and which are not. This and other side-channel leaks can be mitigated by simply wiping the contents of the encrypted device:

$ openssl rand -hex 32 | openssl enc -chacha20 -in /dev/zero -pass stdin -nosalt | sudo dd if=/dev/stdin of=/dev/mapper/archive_crypt bs=4096 status=progress

We could also have used /dev/urandom, but the above technique is much faster.

Now we can create the actual filesystem.

$ sudo mkfs.btrfs --label archive /dev/mapper/archive_crypt

At this point we’re pretty much done. You can add an entry to /etc/fstab to easily mount the filesystem:

/dev/mapper/archive_crypt /home/guyru/archive btrfs noauto,user 0 0
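
Thanks to the user option, mounting no longer requires root (assuming the mount point /home/guyru/archive exists and the LUKS device is already open):

$ mount /home/guyru/archive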

Compiling lensfun-0.3.95 on Debian Buster

Lensfun provides lens distortion correction for Darktable and other raw processing applications. Version 0.3.95 adds the ability to use the Adobe Camera Model, and hence Adobe lens profiles (lcp files). However, lensfun 0.3.95 is not packaged for Debian Buster. Darktable also won’t compile against the latest git version of Lensfun, so you must compile and install version 0.3.95 specifically to get ACM support.

We begin by downloading and extracting Lensfun 0.3.95. It is not tagged in git and the release is not available from the GitHub repository, so we have to download it directly from SourceForge.

$ wget https://sourceforge.net/projects/lensfun/files/0.3.95/lensfun-0.3.95.tar.gz
$ tar -xvf lensfun-0.3.95.tar.gz
$ cd lensfun-0.3.95/

Lensfun uses CMake for building and also has CPack enabled. We can use CPack to build a deb package and install it, which allows easier integration and uninstallation in the future.

$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=release -DCPACK_BINARY_DEB=ON ../
$ make -j`nproc` && make package
$ sudo apt install ./liblensfun2_0.3.95.0_amd64.deb
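
You can verify that the right version is now installed (the expected output below matches the package version we just built):

$ dpkg -s liblensfun2 | grep '^Version'
Version: 0.3.95.0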

Getting Radeon RX 550 to work under Debian Stretch

There are three things that need to be updated in Debian Stretch to get the Radeon RX 550 running properly (or at all): the kernel, Mesa, and the proprietary binary firmware (bummer, I know).

First, make sure you have stretch-backports in your apt sources with all the relevant components:

deb http://ftp.debian.org/debian stretch-backports main contrib non-free

Now, the kernel that currently ships with stretch (4.9.0-8) is missing some important configuration options: CONFIG_DRM_AMDGPU_SI and CONFIG_DRM_AMDGPU_CIK. You will need to install the latest kernel from the backports, which does have the correct configuration.

$ sudo apt install -t stretch-backports linux-image-amd64
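
Once running the new kernel, you can check that both options are enabled (the output below assumes the backports kernel has them built in):

$ grep -E 'CONFIG_DRM_AMDGPU_(SI|CIK)' /boot/config-$(uname -r)
CONFIG_DRM_AMDGPU_SI=y
CONFIG_DRM_AMDGPU_CIK=y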

The next step is getting the proper firmware:

$ sudo apt install -t stretch-backports firmware-linux-nonfree

This will also update firmware-amd-graphics, which provides the binary blobs needed by the amdgpu driver to work properly. The old version does not support the new Polaris 12 architecture used by the RX 550, while the version from the backports (20180825) does.

Now comes the part of upgrading Mesa. There are a bunch of binary packages derived from the mesa source package, and we need to upgrade each of them to version 18 (or later, but 18 is what the backports provide). The following two commands will upgrade any mesa-related package already installed and then re-mark them as automatically installed (just to keep things tidy as they were).

$ sudo apt install -t stretch-backports $(grep-status -S mesa -a -FStatus "install ok installed" -s Package -n | sort -u)
$ sudo apt-mark auto $(grep-status -S mesa -a -FStatus "install ok installed" -s Package -n | sort -u)
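
Note that grep-status is not installed by default; it is provided by the dctrl-tools package:

$ sudo apt install dctrl-tools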

Now you can restart your computer and the RX 550 should work. You can test it using:

$ DRI_PRIME=1 glxinfo | grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: Radeon 500 Series (POLARIS12, DRM 3.26.0, 4.18.0-0.bpo.1-amd64, LLVM 6.0.0)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.1.6

The DRI_PRIME=1 is necessary; otherwise glxinfo would use the integrated card.

This step is not strictly necessary, but if lspci does not properly display the RX 550, you will need to update the PCI IDs database used to translate device IDs to human-readable names:

$ sudo update-pciids

A final word: if you are using TLP for power management, it may not play nice with the RX 550. With TLP enabled I get pretty horrible performance out of it (regardless of being on AC or battery).

Google Adsense for WordPress – No Plugin Needed

Adding Google Adsense ads to your WordPress blog used to be a tedious task. Either you manually modified your theme, or you used a plugin, such as Google’s own Adsense plugin. Even then, placements were limited and handling both mobile and desktop themes was complicated at best. Recently, two things have changed: Google retired the Adsense plugin and introduced Auto ads.

At first, the situation seemed to have turned for the worse. Without the official plugin, you had to resort to using a third-party plugin or manually placing ads in your theme. But Auto ads made things much simpler. Instead of having to place your ads manually, you can let Google do it for you. It works great on both desktop and mobile themes.

The easiest way to enable Auto ads is using a child theme. First, you need to get the Auto ads ad code. Next, add the following lines to your child theme’s functions.php, making sure to replace the JavaScript snippet with your own:

// Add Google Adsense
function my_google_adsense_header() {
?>
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({
          google_ad_client: "ca-pub-4066984350135216",
          enable_page_level_ads: true
     });
</script>
<?php
}
add_action( 'wp_head', 'my_google_adsense_header');

Google Analytics for WordPress

To set up Google Analytics tracking for WordPress you don’t need any third-party plugin. It can easily be done using a child theme. A child theme is code that modifies the current theme in a way that won’t interfere with future upgrades. To enable Google Analytics, start by creating a child theme using the official documentation.

Now you need to get your Google Analytics tracking code. Over the years the tracking code has gone through a few different versions. Make sure you are getting the latest tracking code, which currently looks like:

<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-380837-9"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'UA-380837-9');
</script>

To get the tracking code, follow the instructions on this page.

Now we add the tracking code to each page using the child theme. In your child theme’s directory, edit the functions.php file and add the following lines. Replace the tracking code with the one you acquired.

// Add Google Analytics tracking
function my_google_analytics_header() {
?>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-380837-9"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'UA-380837-9');
</script>
<?php
}
add_action( 'wp_head', 'my_google_analytics_header');

This adds the tracking code to the <head> of every page.

Creating Local Backups using `rdiff-backup`

rdiff-backup provides an easy way to maintain reverse-incremental backups of your data. Reverse incremental backups differ from normal incremental backups in that the full backup is synthetically updated on each run, while reverse diffs are kept for all the files that changed. It is best illustrated by an example. Let’s consider backups taken on three consecutive days:
1. Full backup (1st day).
2. Full backup (2nd day), reverse-diff: 2nd -> 1st.
3. Full backup (3rd day), reverse diffs: 3rd -> 2nd, 2nd -> 1st.

Compare that with the regular incremental backup model which would be:
1. Full backup (1st day).
2. Diff: 2nd -> 1st, full backup (1st day).
3. Diffs: 3rd -> 2nd, 2nd -> 1st, full backup (1st day).

This makes purging old backups especially easy: because newer backups never depend on older ones, you can simply purge the reverse diffs as they expire. In contrast, in the regular incremental model each incremental backup depends on every prior backup in the chain, going back to the full backup. Thus, you can’t remove the full backup until all the incremental backups that depend on it have expired as well. This means that most of the time you need to keep more than one full backup, which takes up precious disk space.

rdiff-backup has some disadvantages as well:
1. Backups are not encrypted, making it unsuitable as-is for remote backups.
2. Only the reverse-diffs are compressed.

The advantages of rdiff-backup make it suitable for creating local Time Machine-like backups.

The following script, set via cron to run daily, can be used to back up your home directory:

#! /bin/sh

SOURCE="/home/user/"
TARGET="/home/user/backups/rdiff-home/"

## Backup
rdiff-backup --exclude-if-present .nobackup --exclude-globbing-filelist /home/user/backups/home-exclude --print-statistics $SOURCE $TARGET

## Remove old data
rdiff-backup --remove-older-than 1M --force --print-statistics $TARGET

where `/home/user/backups/home-exclude` should look like:

+ /home/user/Desktop
+ /home/user/Documents
+ /home/user/Music
+ /home/user/Pictures
+ /home/user/Videos
+ /home/user/.vim
+ /home/user/.vimrc
+ /home/user/.ssh
+ /home/user/.gnupg

**

This selects only the listed files and directories for backup; the final `**` line excludes everything that is not explicitly included.

The --exclude-if-present .nobackup option allows you to easily exclude directories by placing a .nobackup file in any directory you wish to ignore. The --force argument when purging the old backups allows removing more than one expired backup in a single run.
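
For example, assuming the script is saved as /home/user/bin/backup-home.sh (a path chosen purely for illustration), a crontab entry running it daily could look like:

@daily /home/user/bin/backup-home.sh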

Listing backup chains:

$ rdiff-backup -l ~/backups/rdiff-home/

Restoring files from the most recent backup is simple. Because rdiff-backup keeps the latest backup as a normal mirror on the disk, you can simply copy the file you need out of the backup directory. To restore older files:

$ rdiff-backup --restore-as-of 10D ~/backups/rdiff-home/.vimrc restored_vimrc

Installing Firefox Quantum on Debian Stretch

Debian only provides the ESR (Extended Support Release) line of Firefox. As a result, the latest version of Firefox currently available for Debian Stretch is Firefox 52, which is pretty old. Recently, Firefox 57, also known as Quantum, was released as beta. It provides many improvements over older Firefox releases, in both security and performance.

Begin by downloading the latest beta (for Firefox 57) and extract it to your home directory:

$ wget -O firefox-beta.tar.bz2 "https://download.mozilla.org/?product=firefox-beta-latest&os=linux64&lang=en-US"
$ tar -C ~/.local/ -xvf firefox-beta.tar.bz2

This installs Firefox for your current user. Because Firefox is installed in a user-specific location (and without root privileges), it will also auto-update when new versions are released.
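
You can test the new installation by launching the binary directly:

$ ~/.local/firefox/firefox &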

If you prefer using the stable version of Firefox, simply replace the first step with:

$ wget -O firefox-stable.tar.bz2 "https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en-US"

Next, we take care of desktop integration. Put the following in ~/.local/share/applications/firefox-beta.desktop:

[Desktop Entry]
Type=Application
Name=Firefox Beta
Exec=/home/guyru/.local/firefox/firefox %u
X-MultipleArgs=false
Icon=firefox-esr
Categories=Network;WebBrowser;
Terminal=false
MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/vnd.mozilla.xul+xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;
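
Depending on your desktop environment, you may need to refresh the application cache for the new entry to show up; if update-desktop-database (from the desktop-file-utils package) is available:

$ update-desktop-database ~/.local/share/applications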

Patching an Existing Debian Package

This tutorial walks you through patching an existing Debian package. It is useful if you want to create a .deb of a package after fixing a bug or modifying the source in any other way. We will use hugin as our example.

We start by fetching the source package:

$ apt-get source hugin
$ cd hugin-2017.0.0+dfsg

We will need a tool named quilt to make the process easier.

# apt install quilt

Before using quilt, we want to make it aware of the debian/patches directory, which holds the patches. Adding the following lines to ~/.quiltrc will make quilt search up the directory tree for the debian/patches directory:

d=. ; while [ ! -d $d/debian -a `readlink -e $d` != / ]; do d=$d/..; done
if [ -d $d/debian ] && [ -z $QUILT_PATCHES ]; then
        # if in Debian packaging tree with unset $QUILT_PATCHES
        QUILT_PATCHES="debian/patches"
        QUILT_PATCH_OPTS="--reject-format=unified"
        QUILT_DIFF_ARGS="-p ab --no-timestamps --no-index --color=auto"
        QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
        QUILT_COLORS="diff_hdr=1;32:diff_add=1;34:diff_rem=1;31:diff_hunk=1;33:diff_ctx=35:diff_cctx=33"
        if ! [ -d $d/debian/patches ]; then mkdir $d/debian/patches; fi
fi

Now starts the actual patching process. The patches are applied in series, and our new patch should be the last one. We start by applying any existing patches, and then create a new patch:

$ quilt push -a
$ quilt new 44_setlocale.patch

I chose the 44_ prefix because hugin already has a patch named 43_fallbackhelp.patch, and the convention is to name patches so that the names reflect the order in which they are applied. Next, we tell quilt which files we are going to modify, and then edit them:

$ quilt add src/hugin1/hugin/huginApp.cpp
$ vim src/hugin1/hugin/huginApp.cpp

Alternatively, instead of editing the files manually, quilt import can be used to import an existing patch.

Each patch comes with its own metadata to let other people know who wrote it and what it does. Use

$ quilt header --dep3 -e

to edit this metadata. For example:

Description: Call setlocale()
This fixes a bug in wxExecute, see https://trac.wxwidgets.org/ticket/16206
The patch has been submitted to upstream, https://groups.google.com/d/msg/hugin-ptx/FCi7ykPDZ5E/3w8E5U1SCQAJ
Author: Guy Rutenberg <guyrutenberg@gmail.com>
Last-Update: 2017-10-04
---
This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
See DEP-3 (http://dep.debian.net/deps/dep3/) for more details about the different fields.

After we finish editing, we finalize the patch and unapply all the patches:

$ quilt refresh
$ quilt pop -a

Now you can continue to build the deb from source as usual. We use debchange to create a new version, and debuild to build the package. After the package is built, it can be installed using debi:

$ DEBEMAIL="Guy Rutenberg <guyrutenberg@gmail.com>" debchange --nmu
$ debuild -us -uc -i -I -j7
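
debi also comes with devscripts; run from the source tree, it picks up the freshly built packages and installs them:

$ sudo debi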

Use `mk-build-deps` instead of `apt-get build-dep`

If you are building a package from source on Debian or Ubuntu, usually the first step is to install the build-dependencies of the package. This is usually done with one of two commands:

$ sudo apt-get build-dep PKGNAME

or

$ sudo aptitude build-dep PKGNAME

The problem is that there is no easy way to undo or revert the installation of the build dependencies. All the installed packages are marked as manually installed, so you cannot simply “autoremove” those packages later on. Webupd8 suggests a clever one-liner that tries to parse the build dependencies out of apt-cache and mark them as automatically installed. However, as mentioned in the comments, it may be too liberal in marking packages as automatically installed, and hence remove too many packages.

The real solution is mk-build-deps. First you have to install it:

$ sudo apt install devscripts

Now, instead of using apt-get or aptitude directly to install the build-dependencies, use mk-build-deps:

$ mk-build-deps PKGNAME --install --root-cmd sudo --remove

mk-build-deps will create a new package, called PKGNAME-build-deps, which depends on all the build-dependencies of PKGNAME, and then install it, thereby pulling in and installing all the build-dependencies. As those packages are now installed as dependencies, they are marked as automatically installed. Once you no longer need the build-dependencies, you can remove the package PKGNAME-build-deps, and apt will autoremove all the build-dependencies which are no longer necessary.
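
Cleaning up later is then straightforward:

$ sudo apt remove PKGNAME-build-deps
$ sudo apt autoremove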

Let’s Encrypt: Reload Nginx after Renewing Certificates

Today I had an incident which caused my webserver to serve expired certificates. My blog relies on Let’s Encrypt for SSL/TLS certificates, which have to be renewed every 3 months. Usually, the cronjob which runs certbot renew takes care of it automatically. However, there is one step missing: the server must reload the renewed certificates. Most of the time the server gets reloaded often enough that everything is okay, but today it had been quite a while since nginx was last restarted, so expired certificates were served and the blog became unavailable.

To work around this, we can make sure nginx reloads its configuration after each successful certificate renewal. The automatic renewal is defined in /etc/cron.d/certbot. The default contents under Debian Jessie are as follows:

# /etc/cron.d/certbot: crontab entries for the certbot package
#
# Upstream recommends attempting renewal twice a day
#
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc.  Renewal will only occur if expiration
# is within 30 days.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

0 */12 * * * root test -x /usr/bin/certbot && perl -e 'sleep int(rand(3600))' && certbot -q renew

The last line makes sure certificate renewal runs twice a day. Append --renew-hook "/etc/init.d/nginx reload" to it, so it looks like this:

0 */12 * * * root test -x /usr/bin/certbot && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "/etc/init.d/nginx reload"

The --renew-hook option runs the next argument after each successful certificate renewal. In our case, we use it to reload the nginx configuration, which also loads the newly renewed certificates.

Update 2019-02-27:

renew-hook has been deprecated in recent versions of certbot. Additionally, Debian moved from using cronjobs for automatic renewals to systemd timers where available. On the other hand, certbot now supports defining hooks in configuration files. So, instead of what is described above, I would suggest creating a file /etc/letsencrypt/renewal-hooks/deploy/01-reload-nginx with the following content:

#! /bin/sh
set -e

/etc/init.d/nginx configtest
/etc/init.d/nginx reload

Don’t forget to make the file executable:
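
$ sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/01-reload-nginx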