Creating Local Backups using `rdiff-backup`

rdiff-backup provides an easy way to maintain reverse-incremental backups of your data. Reverse incremental backups differ from normal incremental backups: instead of keeping an old full backup and forward diffs, the full backup is synthetically updated on each run, and reverse diffs are kept for the files that changed. It is best illustrated by an example. Let’s consider backups taken on three consecutive days:
1. Full backup (1st day).
2. Full backup (2nd day), reverse-diff: 2nd -> 1st.
3. Full backup (3rd day), reverse diffs: 3rd -> 2nd, 2nd -> 1st.

Compare that with the regular incremental backup model which would be:
1. Full backup (1st day).
2. Diff: 1st -> 2nd, full backup (1st day).
3. Diffs: 2nd -> 3rd, 1st -> 2nd, full backup (1st day).

This makes purging old backups especially easy: reverse incremental backups allow you to simply purge the reverse diffs as they expire, because newer backups never depend on older ones. In contrast, in the regular incremental model each incremental backup depends on every prior backup in the chain, going back to the full backup. Thus, you can’t remove the full backup until all the incremental backups that depend on it expire as well. This means that most of the time you need to keep more than one full backup, which takes up precious disk space.

rdiff-backup has some disadvantages as well:
1. Backups are not encrypted, making it unsuitable as-is for remote backups.
2. Only the reverse-diffs are compressed.

The advantages of rdiff-backup make it suitable for creating local Time Machine-like backups.

The following script, set via cron to run daily, can be used to take backups of your home directory:

#! /bin/sh

SOURCE="/home/user/"
TARGET="/home/user/backups/rdiff-home/"

## Backup
rdiff-backup --exclude-if-present .nobackup --exclude-globbing-filelist /home/user/backups/home-exclude --print-statistics $SOURCE $TARGET

## Remove old data
rdiff-backup --remove-older-than 1M --force --print-statistics $TARGET

where `/home/user/backups/home-exclude` should look like:

+ /home/user/Desktop
+ /home/user/Documents
+ /home/user/Music
+ /home/user/Pictures
+ /home/user/Videos
+ /home/user/.vim
+ /home/user/.vimrc
+ /home/user/.ssh
+ /home/user/.gnupg

- **

in order to select only certain files and directories to back up. The final `- **` line excludes everything that was not explicitly included.

The `--exclude-if-present .nobackup` option allows you to easily exclude directories by adding a .nobackup file to them. The `--force` argument when purging the old backups allows removing more than one expired backup in a single run.
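For example, to mark a scratch directory to be skipped (the path is just an example):

$ touch /home/user/tmp/.nobackup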

Listing backup chains:

$ rdiff-backup -l ~/backups/rdiff-home/

Restoring files from the most recent backup is simple. Because rdiff-backup keeps the latest backup as a normal mirror on the disk, you can simply copy the file you need out of the backup directory. To restore older files:

$ rdiff-backup --restore-as-of 10D ~/backups/rdiff-home/.vimrc restored_vimrc
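You can restore an entire snapshot the same way by pointing at the repository root; the destination path below is just an example:

$ rdiff-backup --restore-as-of 10D ~/backups/rdiff-home/ ~/restored-home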

Installing Firefox Quantum on Debian Stretch

Debian only provides the ESR (Extended Support Release) line of Firefox. As a result, the latest version of Firefox currently available for Debian Stretch is Firefox 52, which is pretty old. Recently, Firefox 57, also known as Quantum, was released as a beta. It provides many improvements over older Firefox releases, in both security and performance.

Begin by downloading the latest beta (for Firefox 57) and extract it to your home directory:


$ wget -O firefox-beta.tar.bz2 "https://download.mozilla.org/?product=firefox-beta-latest&os=linux64&lang=en-US"
$ tar -C ~/.local/ -xvf firefox-beta.tar.bz2

This installs Firefox for your current user. Because Firefox is installed in a user-specific location (and without root privileges), it will also auto-update when new versions are released.
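Optionally, you can symlink the binary into a directory in your PATH so it can be launched from the command line (assuming ~/.local/bin exists and is in your PATH):

$ ln -s ~/.local/firefox/firefox ~/.local/bin/firefox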

If you prefer using the stable version of Firefox, simply replace the first step with:


$ wget -O firefox-stable.tar.bz2 "https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en-US"

Next, we take care of desktop integration. Put the following in ~/.local/share/applications/firefox-beta.desktop:


[Desktop Entry]
Type=Application
Name=Firefox Beta
Exec=/home/guyru/.local/firefox/firefox %u
X-MultipleArgs=false
Icon=firefox-esr
Categories=Network;WebBrowser;
Terminal=false
MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/vnd.mozilla.xul+xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;
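Adjust the Exec path to match your own home directory. If your desktop environment does not pick up the new entry automatically, refreshing the desktop database may help (update-desktop-database comes with desktop-file-utils):

$ update-desktop-database ~/.local/share/applications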

Patching an Existing Debian Package

This tutorial walks you through patching an existing Debian package. It is useful if you want to create a .deb of a package after fixing some bug or otherwise modifying the source. We will use hugin as our example.

We start by fetching the source package:

$ apt-get source hugin
$ cd hugin-2017.0.0+dfsg

We will need a tool named quilt to make the process easier.

# apt install quilt

Before using quilt, we want to make it aware of the debian/patches directory, which holds the patches. Adding the following lines to ~/.quiltrc will make quilt search up the directory tree for the debian/patches directory:

d=. ; while [ ! -d $d/debian -a `readlink -e $d` != / ]; do d=$d/..; done
if [ -d $d/debian ] && [ -z $QUILT_PATCHES ]; then
        # if in Debian packaging tree with unset $QUILT_PATCHES
        QUILT_PATCHES="debian/patches"
        QUILT_PATCH_OPTS="--reject-format=unified"
        QUILT_DIFF_ARGS="-p ab --no-timestamps --no-index --color=auto"
        QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
        QUILT_COLORS="diff_hdr=1;32:diff_add=1;34:diff_rem=1;31:diff_hunk=1;33:diff_ctx=35:diff_cctx=33"
        if ! [ -d $d/debian/patches ]; then mkdir $d/debian/patches; fi
fi

Now starts the actual patching process. The patches are applied in series, and our new patch should be the last one. We start by applying any existing patches, and then create a new patch:

$ quilt push -a
$ quilt new 44_setlocale.patch

I chose the 44_ prefix because hugin already has a patch named 43_fallbackhelp.patch, and the convention is to name patches so that the names reflect the order in which they are applied. Next, we tell quilt which files we are going to modify, and then we edit them:

$ quilt add src/hugin1/hugin/huginApp.cpp
$ vim src/hugin1/hugin/huginApp.cpp

Alternatively, instead of editing the files manually, quilt import can be used to import an existing patch.
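If you already have the change as a patch file, importing it might look like this (the patch path is hypothetical):

$ quilt import ../fix-setlocale.patch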

Each patch comes with its own metadata to let other people know who wrote it and what it does. Use

$ quilt header --dep3 -e

to edit this metadata. For example:

Description: Call setlocale()
This fixes a bug in wxExecute, see https://trac.wxwidgets.org/ticket/16206
The patch has been submitted to upstream, https://groups.google.com/d/msg/hugin-ptx/FCi7ykPDZ5E/3w8E5U1SCQAJ
Author: Guy Rutenberg <guyrutenberg@gmail.com>
Last-Update: 2017-10-04
---
This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
See DEP-3 (http://dep.debian.net/deps/dep3/) for more details about the different fields.

After we finish editing, we finalize the patch and unapply all the patches:

$ quilt refresh
$ quilt pop -a

Now you can continue to build the deb from source as usual. We use debchange to create a new version, and debuild to build the package. After the package is built, it can be installed using debi:

$ DEBEMAIL="Guy Rutenberg <guyrutenberg@gmail.com>" debchange --nmu
$ debuild -us -uc -i -I -j7
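Run from the package source tree, debi (also part of devscripts) finds the freshly built package via the .changes file and installs it:

$ sudo debi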

Use `mk-build-deps` instead of `apt-get build-dep`

If you are building a package from source on Debian or Ubuntu, usually the first step is to install the build-dependencies of the package. This is usually done with one of two commands:

$ sudo apt-get build-dep PKGNAME

or

$ sudo aptitude build-dep PKGNAME

The problem is that there is no easy way to undo or revert the installation of the build-dependencies. All the installed packages are marked as manually installed, so you cannot simply expect to “autoremove” those packages later. Webupd8 suggests a clever one-liner that tries to parse the build-dependencies out of apt-cache and mark them as automatically installed. However, as mentioned in the comments, it may be too liberal in marking packages as automatically installed, and hence remove too many packages.

The real solution is mk-build-deps. First you have to install it:

$ sudo apt install devscripts

Now, instead of using apt-get or aptitude directly to install the build-dependencies, use mk-build-deps:

$ mk-build-deps PKGNAME --install --root-cmd sudo --remove

mk-build-deps will create a new package, called PKGNAME-build-deps, which depends on all the build-dependencies of PKGNAME, and then install it, thereby pulling in all the build-dependencies. As those packages are now installed as dependencies, they are marked as automatically installed. Once you no longer need the build-dependencies, you can remove the package PKGNAME-build-deps, and apt will autoremove all the build-dependencies which are no longer necessary.
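Cleaning up afterwards might look like this:

$ sudo apt remove PKGNAME-build-deps
$ sudo apt autoremove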

Let’s Encrypt: Reload Nginx after Renewing Certificates

Today I had an incident which caused my webserver to serve expired certificates. My blog relies on Let’s Encrypt for SSL/TLS certificates, which have to be renewed every 3 months. Usually, the cronjob which runs certbot renew takes care of it automatically. However, there is one step missing: the server must reload the renewed certificates. Most of the time the server gets reloaded often enough, so everything is okay. Today, however, it had been quite a while since the nginx server was last restarted, so expired certificates were served and the blog became unavailable.

To work around it, we can make sure nginx reloads its configuration after each successful certificate renewal. The automatic renewal is defined in /etc/cron.d/certbot. The default contents under Debian Jessie are as follows:

# /etc/cron.d/certbot: crontab entries for the certbot package
#
# Upstream recommends attempting renewal twice a day
#
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc.  Renewal will only occur if expiration
# is within 30 days.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

0 */12 * * * root test -x /usr/bin/certbot && perl -e 'sleep int(rand(3600))' && certbot -q renew

The last line makes sure certificate renewal runs twice a day. Append --renew-hook "/etc/init.d/nginx reload" to it, so it looks like this:

0 */12 * * * root test -x /usr/bin/certbot && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "/etc/init.d/nginx reload"

The --renew-hook option runs the given command after each successful certificate renewal. In our case we use it to reload the nginx configuration, which also loads the newly renewed certificates.

Update 2019-02-27:

renew-hook has been deprecated in recent versions of certbot. Additionally, Debian moved from cronjobs for automatic renewals to systemd timers where available. On the other hand, certbot now supports having hooks in configuration files. So, instead of what is described above, I would suggest creating a file /etc/letsencrypt/renewal-hooks/deploy/01-reload-nginx with the following content:

#! /bin/sh
set -e

/etc/init.d/nginx configtest
/etc/init.d/nginx reload

Don’t forget to make the file executable:
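# chmod +x /etc/letsencrypt/renewal-hooks/deploy/01-reload-nginx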

Installing OrderNet Pro on Debian Jessie

OrderNet is a popular stock trading platform in Israel. OrderNet comes in two versions: the regular version is based on Silverlight and can be used on Linux using Pipelight; the Pro version is a desktop program written for the .NET Framework and features a better interface. This post walks through the steps needed to get OrderNet Pro running on Debian Jessie using Wine.


Creating a personal apt repository using `dpkg-scanpackages`

From time to time I build and backport deb packages. Most of them are for my personal use, but sharing them would be nice. Another advantage of setting up a personal repository over directly installing deb files is that dependencies can be installed from the repository automatically. This is especially useful if one source package builds multiple binary packages which depend on one another.

The Debian wiki has a list of programs and ways to set up such a personal repository. However, I found most of them too cumbersome for my limited requirements. The way I describe below is probably the simplest and easiest way to get up and running.

The first step is installing dpkg-dev, which provides dpkg-scanpackages:

sudo apt install dpkg-dev

Next, put the deb files you created in some local directory, such as /usr/local/debian, and cd into it. Running

# dpkg-scanpackages -m . | gzip -c > Packages.gz

will scan all the *.deb files in the directory and create an appropriate Packages.gz file. You need to repeat this step whenever you add new packages to /usr/local/debian.

Finally to enable the new local repository, add the following line to /etc/apt/sources.list:

deb [trusted=yes] file:///usr/local/debian ./

The [trusted=yes] option instructs apt to treat the packages as authenticated. Alternatively, if you want to share the repository with others, make sure that your webserver serves the directory and point to it:

deb https://www.guyrutenberg.com/debian/jessie ./

(You will need the apt-transport-https package in order to use https repositories.)
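Installing it is straightforward:

$ sudo apt install apt-transport-https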

Don’t forget to run apt update before trying to install packages from the new repository.

Getting Started with Let’s Encrypt – Tutorial

A few days ago I got my invitation to the Let’s Encrypt Beta Program. For those of you who are not familiar with Let’s Encrypt:

Let’s Encrypt is a new free certificate authority, built on a foundation of cooperation and openness, that lets everyone be up and running with basic server certificates for their domains through a simple one-click process.

This short tutorial is intended to get you up and running with your own Let’s Encrypt signed certificates.

The first thing is to get the Let’s Encrypt client:

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

The main command we will be working with is ./letsencrypt-auto. The first time you run it, it will also ask for sudo, install various dependencies using your package manager, and set up a virtualenv environment.

The next step is to issue the certificate and prove to Let’s Encrypt that you have control over the domain. The client supports two methods of validation. The first one is the standalone server: it works by setting up a webserver on port 443 and responding to a challenge from the Let’s Encrypt servers. However, if you already have your own web-server running on port 443 (the default for TLS/SSL), you will have to temporarily shut it down. To use the standalone method run:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory certonly

The second method is called Webroot authentication. It works by placing a folder (.well-known/acme-challenge) in the document root of your server, with files corresponding to the responses to challenges. To use it, run:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory -a webroot --webroot-path /var/www/html/ certonly

Whichever method you choose, it will ask for a list of domains you want to validate and for your email address. You can write multiple domains: the first one will be the Common Name (CN) and the rest will appear in the Subject Alt Name field.


The newly generated certificates will be placed in

/etc/letsencrypt/live/yourdomain.com/

The important files in this directory are fullchain.pem, which contains the full certificate chain to be served to the browser, and privkey.pem, which is the private key.

An example Nginx configuration will now look like:

        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/guyrutenberg.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/guyrutenberg.com/privkey.pem;

Just don’t forget to reload the web-server so configuration changes take effect. No more government snooping on my blog 😉.
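Reloading nginx might look like this (adjust to your init system):

$ sudo service nginx reload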


Securing Access using TLS/SSL Client Certificates

This tutorial will guide you in setting up authentication using TLS/SSL client certificates. It is a simple one, as it does not delve into details about integration with server-side apps. Instead, it simply gives you instructions on how to set up client certificates as a means to prevent unwanted parties from accessing your website.

For example, one scenario where this may come in useful is limiting access to sensitive things on the server, like phpMyAdmin. phpMyAdmin handles sensitive data (such as database credentials) and does so over plain HTTP, which may pose several security risks. Even if the data were encrypted, someone with access to the application might find vulnerabilities in it and exploit its relatively high privileges to compromise the server. The solution is to also limit who has access to the application at all. A possible solution, which I’ve used, is to limit access to the local machine only and use SSH or a VPN tunnel to reach phpMyAdmin. A better solution is to use TLS/SSL client certificates. They operate on the connection level and provide both encryption and authentication. They are easier to set up than VPN tunnels and easier to use.

Note that limiting access based on TLS/SSL Client Certificate can only be done on the sub-domain level, because it happens as part of the connection, before any specific HTTP request can be made.

Most of the tutorial is not HTTP-server-specific; however, the server configuration part relates to Nginx. As other servers (such as Lighttpd) use a very similar configuration for client certificates, adapting the instructions should be straightforward.

Creating a CA

The two commands below will create a CA private key and a corresponding self-signed certificate for you to sign the TLS client certificates with.

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -aes-128-cbc -out ca.key
openssl req -new -x509 -days 365 -sha256 -key ca.key -out ca.pem

The first command will ask you for a pass phrase for the key. It is used to protect access to the private key. You can decide to not use one by dropping the -aes-128-cbc option from the command.

The second command will ask you to provide some details to be included in the certificate. Those details will be sent to the browser by the web-server to let it know which client certificate to send back when authenticating.
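You can inspect the resulting certificate to verify those details:

$ openssl x509 -in ca.pem -text -noout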

Server Configuration

Upload the ca.pem that was just generated to your server. You should not upload the private key (ca.key).

The following instructions are for Nginx:

ssl_client_certificate /path/to/ca.pem;
ssl_verify_client on; # we require client certificates to access

Assuming you already enabled TLS/SSL for the specific sub-domain, your configuration should look something like this:

server {
        server_name subdomain.example.com;

        # SSL configuration
        #
        listen 443 ssl;
        listen [::]:443 ssl;

        ssl_certificate /etc/nginx/example.pem;
        ssl_certificate_key /etc/nginx/example.key;

        ssl_client_certificate /etc/nginx/ca.pem;
        ssl_verify_client on;
}

After reloading the server, check that everything is configured correctly by trying to access your site via HTTPS. It should report “400 Bad Request” and say that “No required SSL certificate was sent”.
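You can also check this from the command line (replace the host with your own; -k skips verification of the server certificate):

$ curl -k https://subdomain.example.com/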

Creating a Client Certificate

The following commands will create the private key used for the client certificate (client.key) and a corresponding Certificate Signing Request (client.csr), which the owner of the CA certificate can sign (in the case of this tutorial, that is you).

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out client.key
openssl req -new -key client.key -sha256 -out client.csr

You will be asked again to provide some details, this time about you. Those details will be available to the server once your browser sends it the client certificate. You can safely leave the “challenge password” empty [1].

You can add the flag -aes-128-cbc to the first command if you want the private key for the client certificate to be encrypted. If you opt for it, you will be prompted for a pass phrase just like before.

Signing a Client Certificate

The next step is to sign the certificate signing request from the last step. It is good practice to review it and make sure all the details are as expected, so you do not sign anything you did not intend to:

openssl req -in client.csr -text -verify -noout | less

If everything looks just fine, you can sign it with the following command.

openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca.key \
    -set_serial 0x`openssl rand -hex 16` -sha256 -out client.pem

You will be prompted for your pass phrase for ca.key if you chose one in the first step.
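As before, you can inspect the newly signed certificate:

$ openssl x509 -in client.pem -text -noout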

Installing Client Key

Now comes the final part, where we take the signed client certificate, client.pem, and combine it with the private key so it can be installed in our browser.

openssl pkcs12 -export -in client.pem -inkey client.key -name "Sub-domain certificate for some name" -out client.p12

Adjust the -name parameter to your liking; it will be used to identify the certificate in various places, such as the browser. If your private key was encrypted, you will be prompted for its pass phrase. Encryption for certificates in p12 format is mandatory, so you will be prompted to enter a password for the generated file as well. It is OK to reuse the same password here, as those files are practically equivalent. Once the certificate is imported into your browser, you will not need the password for normal usage, until you want to import it into another browser.

GlobalSign provides instructions on how to actually install the p12 client certificate in browsers on Linux and Windows.

References


  1. It is used to “enhance” the security of certificate revocation requests, by requiring not only knowledge of the private key but also the challenge password. Thus, someone who gets hold of your private key cannot revoke the certificate by himself. However, this is also the reason this option is not used more often: when someone steals your private key, they usually prefer that the certificate not be revoked.

Setting Up Lighttpd with PHP-FPM

PHP-FPM can provide an alternative to spawn-fcgi when setting up Lighttpd with PHP. It has several advantages over spawn-fcgi, among them:

  • It can dynamically scale and spawn new processes as needed.
  • Gracefully respawn PHP processes after configuration change.
  • Comes with an init.d script, so there is no need to write your own.
  • Ability to log slow PHP script execution (similar to MySQL’s slow query log).

Installation

PHP-FPM is available from the Ubuntu (since 12.04) and Debian repositories, so all you need to do is:

$ sudo apt-get install php5-fpm

Configuration

PHP-FPM works with process pools. Each pool spawns processes independently and has its own configuration. This can be useful for separating the PHP processes of each user or major site on the server. PHP-FPM comes with a default pool configuration in /etc/php5/fpm/pool.d/www.conf. To create new pools, simply copy the default pool configuration and edit it. At a minimum, you will need to set the following (a minimal example follows the list):

  • Pool name – [www]. I name mine according to the user which the pool serves.
  • user – I set the user to the appropriate user, and leave group as www-data.
  • listen = /var/run/php.$pool.sock – Unix sockets have lower overhead than TCP sockets, so if both Lighttpd and PHP run on the same server they are preferable. $pool will be expanded to your pool name. Also, it is more secure to create the sockets in a directory that is not globally writable (unlike /tmp/), so /var/run is a good choice.
  • listen.owner should match the PHP user, while listen.group should match the group Lighttpd runs in, so both have access to the socket.
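Putting it together, a minimal pool sketch could look like this (the user name alice is hypothetical; the pm.* values below are the defaults shipped in www.conf):

[alice]
user = alice
group = www-data
listen = /var/run/php.$pool.sock
listen.owner = alice
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3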

If you copied www.conf to create the new configuration, you will need to rename the original to something like www.conf.default in order to disable it.

In the Lighttpd configuration you need to add the following to each vhost that uses PHP:

fastcgi.server    = ( ".php" => 
        ((
                "disable-time" => 0,
                "socket" => "/var/run/php.pool.sock",
        ))
)

where pool in the socket path is replaced by the matching pool name from the PHP-FPM configuration. Overriding disable-time and setting it to 0 is suitable when you have only one PHP backend and it is local: in this scenario, attempting to connect to the backend is cheap, and if it gets disabled no requests will get through anyway.

Useful Files

  • /etc/php5/fpm/pool.d – The PHP-FPM pool configuration directory.
  • /var/log/php5-fpm.log – The PHP-FPM error log. It will display errors and warnings, notifying you when pm.max_children has been reached, when processes die unexpectedly, etc.