Google Adsense for WordPress – No Plugin Needed

Adding Google Adsense ads to your WordPress blog used to be a tedious task. Either you needed to manually modify your theme, or you had to use a plugin, such as Google’s own Adsense plugin. Even then, placements were limited, and handling both mobile and desktop themes was complicated at best. Recently, two things have changed: Google retired the Adsense plugin and introduced Auto ads.

At first, the situation seemed to have taken a turn for the worse. Without the official plugin, you had to resort to using a third-party plugin or to manually placing ads in your theme. But Auto ads made things much simpler: instead of having to manually place your ads, you let Google do it for you. It works great on both desktop and mobile themes.

The easiest way to enable Auto ads is using a child theme. First, you need to get the Auto ads ad code. Next, in your child theme’s functions.php, add the following lines, making sure to replace the JavaScript snippet with your own:

// Add Google Adsense
function my_google_adsense_header() {
?>
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({
          google_ad_client: "ca-pub-4066984350135216",
          enable_page_level_ads: true
     });
</script>
<?php
}
add_action( 'wp_head', 'my_google_adsense_header');

Google Analytics for WordPress

To set up Google Analytics tracking for WordPress you don’t need any third-party plugin. It can easily be done using a child theme. A child theme is code that modifies the current theme in a way that won’t interfere with future upgrades. To enable Google Analytics, start by creating a child theme using the official documentation.
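
If you just want a minimal child theme to hold the snippets below, the following sketch is enough (it assumes Twenty Seventeen as the parent theme; substitute your parent theme’s directory name in the Template field):

$ mkdir wp-content/themes/twentyseventeen-child
$ cat > wp-content/themes/twentyseventeen-child/style.css <<'EOF'
/*
 Theme Name: Twenty Seventeen Child
 Template: twentyseventeen
*/
EOF
$ printf '<?php\n' > wp-content/themes/twentyseventeen-child/functions.php

After activating the child theme in the WordPress admin, this functions.php is the file we will edit below.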

Now, you need to get your Google Analytics tracking code. Over the years, the tracking code has had a few different versions. You should make sure you are getting the latest tracking code, which currently looks like this:

<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-380837-9"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'UA-380837-9');
</script>

To get the tracking code, follow the instructions on this page.

Now we add the tracking code to each page using the child theme. In your child theme’s directory, edit the functions.php file and add the following lines. Replace the tracking code with the one you acquired.

// Add Google Analytics tracking
function my_google_analytics_header() {
?>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-380837-9"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'UA-380837-9');
</script>
<?php
}
add_action( 'wp_head', 'my_google_analytics_header');

This adds the tracking code to the <head> of every page.

Creating Local Backups using `rdiff-backup`

rdiff-backup provides an easy way to maintain reverse-incremental backups of your data. Reverse-incremental backups differ from normal incremental backups in that the full backup is synthetically updated on each run, while reverse diffs are kept for all the files that changed. It is best illustrated by an example. Let’s consider backups taken on three consecutive days:
1. Full backup (1st day).
2. Full backup (2nd day), reverse diff: 2nd -> 1st.
3. Full backup (3rd day), reverse diffs: 3rd -> 2nd, 2nd -> 1st.

Compare that with the regular incremental backup model which would be:
1. Full backup (1st day).
2. Diff: 1st -> 2nd, full backup (1st day).
3. Diffs: 2nd -> 3rd, 1st -> 2nd, full backup (1st day).

This makes purging old backups especially easy. Reverse-incremental backups allow you to simply purge the reverse diffs as they expire, because newer backups never depend on older ones. In contrast, in the regular incremental model, each incremental backup depends on every prior backup in the chain, going back to the full backup. Thus, you can’t remove the full backup until all the incremental backups that depend on it expire as well. This means that most of the time you need to keep more than one full backup, which takes up precious disk space.

rdiff-backup has some disadvantages as well:
1. Backups are not encrypted, making it unsuitable as-is for remote backups.
2. Only the reverse-diffs are compressed.

The advantages of rdiff-backup make it suitable for creating local Time Machine-like backups.

The following script, set via cron to run daily, can be used to take backups of your home directory:

#! /bin/sh

SOURCE="/home/user/"
TARGET="/home/user/backups/rdiff-home/"

## Backup
rdiff-backup --exclude-if-present .nobackup --exclude-globbing-filelist /home/user/backups/home-exclude --print-statistics $SOURCE $TARGET

## Remove old data
rdiff-backup --remove-older-than 1M --force --print-statistics $TARGET

where `/home/user/backups/home-exclude` should look like:

+ /home/user/Desktop
+ /home/user/Documents
+ /home/user/Music
+ /home/user/Pictures
+ /home/user/Videos
+ /home/user/.vim
+ /home/user/.vimrc
+ /home/user/.ssh
+ /home/user/.gnupg

- **

The trailing `- **` line excludes everything that was not explicitly included, so only the listed files and directories are backed up.

The `--exclude-if-present .nobackup` option allows you to easily add a .nobackup file to directories you wish to ignore. The `--force` argument when purging the old backups allows it to remove more than one expired backup in a single run.
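
For example, to keep a hypothetical scratch directory inside Documents out of the backups, just drop a marker file in it:

$ touch ~/Documents/scratch/.nobackup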

Listing backup chains:

$ rdiff-backup -l ~/backups/rdiff-home/

Restoring files from the most recent backup is simple. Because rdiff-backup keeps the latest backup as a normal mirror on the disk, you can simply copy the file you need out of the backup directory. To restore older files:

$ rdiff-backup --restore-as-of 10D ~/backups/rdiff-home/.vimrc restored_vimrc

Set default application using `xdg-mime`

You can use the xdg-mime utility to query the default mime-type associations and change them. For example,

xdg-mime query default video/mp4

will return the .desktop file associated with the default app for opening mp4 files. To change the default association, specify the desktop file that should handle files of the given mime-type:

xdg-mime default vlc.desktop video/mp4

To check the mime-type of a given file, use

file -ib filename
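
Putting the three commands together, a typical session might look like this (the output lines are illustrative):

$ file -ib video.mp4
video/mp4; charset=binary
$ xdg-mime default vlc.desktop video/mp4
$ xdg-mime query default video/mp4
vlc.desktop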

Installing Firefox Quantum on Debian Stretch

Debian only provides the ESR (Extended Support Release) line of Firefox. As a result, the latest version of Firefox currently available for Debian Stretch is Firefox 52, which is pretty old. Recently, Firefox 57, also known as Quantum, was released as a beta. It provides many improvements over older Firefox releases, in both security and performance.

Begin by downloading the latest beta (Firefox 57) and extracting it under ~/.local:


$ wget -O firefox-beta.tar.bz2 "https://download.mozilla.org/?product=firefox-beta-latest&os=linux64&lang=en-US"
$ tar -C ~/.local/ -xvf firefox-beta.tar.bz2

This installs Firefox for your current user. Because Firefox is installed in a user-specific location (and without root privileges), Firefox will also auto-update when new versions are released.

If you prefer using the stable version of Firefox, simply replace the first step with:


$ wget -O firefox-stable.tar.bz2 "https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en-US"

Next, we take care of desktop integration. Put the following in ~/.local/share/applications/firefox-beta.desktop:


[Desktop Entry]
Type=Application
Name=Firefox Beta
Exec=/home/guyru/.local/firefox/firefox %u
X-MultipleArgs=false
Icon=firefox-esr
Categories=Network;WebBrowser;
Terminal=false
MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/vnd.mozilla.xul+xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;
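
If you also want to make this build your default browser, xdg-settings should pick up the desktop file created above (assuming your desktop environment honors xdg-settings):

$ xdg-settings set default-web-browser firefox-beta.desktop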

Duply credential error when using Amazon S3

Duply is a convenient wrapper around duplicity, a tool for encrypted incremental backups I’ve used for the last couple of years. After a recent upgrade, my Amazon S3 backups failed, reporting the following error:

    'Check your credentials' % (len(names), str(names)))
NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials

Boto, the backend duplicity relies on for Amazon S3, requires the authentication parameters to be passed through the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. As different backends require different variables, duply used to make this transparent: one would just set TARGET_USER and TARGET_PASS, and duply would take care of the rest. However, duply 1.10 broke compatibility and requires you to set the variables yourself. Hence, the fix is to replace the TARGET_* variables with exported AWS_* variables:

# TARGET_USER='XXXXXXXXXXXX'
# TARGET_PASS='XXXXXXXXXXXX'
export AWS_ACCESS_KEY_ID='XXXXXXXXXXXX'
export AWS_SECRET_ACCESS_KEY='XXXXXXXXXXXX'
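
After updating the profile, a quick sanity check is to query the backup collection status (a sketch assuming your duply profile is named s3; substitute your own profile name):

$ duply s3 status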

Patching an Existing Debian Package

This tutorial walks you through patching an existing Debian package. It is useful if you want to create a .deb of a package after fixing a bug or otherwise modifying the source. We will use hugin as our example.

We start by fetching the source package:

$ apt-get source hugin
$ cd hugin-2017.0.0+dfsg

We will need a tool named quilt to make the process easier.

# apt install quilt

Before using quilt, we want to make it aware of the debian/patches directory which holds the patches. Adding the following lines to ~/.quiltrc will make quilt search up the directory tree for the debian/patches directory:

d=. ; while [ ! -d $d/debian -a `readlink -e $d` != / ]; do d=$d/..; done
if [ -d $d/debian ] && [ -z $QUILT_PATCHES ]; then
        # if in Debian packaging tree with unset $QUILT_PATCHES
        QUILT_PATCHES="debian/patches"
        QUILT_PATCH_OPTS="--reject-format=unified"
        QUILT_DIFF_ARGS="-p ab --no-timestamps --no-index --color=auto"
        QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
        QUILT_COLORS="diff_hdr=1;32:diff_add=1;34:diff_rem=1;31:diff_hunk=1;33:diff_ctx=35:diff_cctx=33"
        if ! [ -d $d/debian/patches ]; then mkdir $d/debian/patches; fi
fi

Now starts the actual patching process. The patches are applied in series, and our new patch should be the last one. We start by applying any existing patches, and then create a new patch:

$ quilt push -a
$ quilt new 44_setlocale.patch

I chose the 44_ prefix because hugin already has a patch named 43_fallbackhelp.patch, and the convention is to name patches so their names reflect the order in which they are applied. Next, we tell quilt which files we are about to modify, and then we edit them:

$ quilt add src/hugin1/hugin/huginApp.cpp
$ vim src/hugin1/hugin/huginApp.cpp

Alternatively, instead of editing the files manually, quilt import can be used to import an existing patch.
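
For example, assuming a ready-made fix exists at ../44_setlocale.patch (a hypothetical path), importing it would look like:

$ quilt import ../44_setlocale.patch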

Each patch comes with its own metadata to let other people know who wrote it and what it does. Use

$ quilt header --dep3 -e

to edit this metadata. For example:

Description: Call setlocale()
This fixes a bug in wxExecute, see https://trac.wxwidgets.org/ticket/16206
The patch has been submitted to upstream, https://groups.google.com/d/msg/hugin-ptx/FCi7ykPDZ5E/3w8E5U1SCQAJ
Author: Guy Rutenberg <guyrutenberg@gmail.com>
Last-Update: 2017-10-04
---
This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
See DEP-3 (http://dep.debian.net/deps/dep3/) for more details about the different fields.

After we finish editing, we finalize the patch and unapply all the patches:

$ quilt refresh
$ quilt pop -a

Now, you can continue to build the deb from source as usual. We use debchange to create a new version, and debuild to build the package. After the package is built, it can be installed using debi:

$ DEBEMAIL="Guy Rutenberg <guyrutenberg@gmail.com>" debchange --nmu
$ debuild -us -uc -i -I -j7

Use `mk-build-deps` instead of `apt-get build-dep`

If you are building a package from source on Debian or Ubuntu, the first step is usually to install the package’s build-dependencies. This is typically done with one of two commands:

$ sudo apt-get build-dep PKGNAME

or

$ sudo aptitude build-dep PKGNAME

The problem is that there is no easy way to undo or revert the installation of the build-dependencies. All the installed packages are marked as manually installed, so later one cannot simply “autoremove” those packages. Webupd8 suggests a clever one-liner that tries to parse the build-dependencies out of apt-cache and mark them as automatically installed. However, as mentioned in the comments, it may be too liberal in marking packages as automatically installed, and hence remove too many packages.

The real solution is mk-build-deps. First you have to install it:

$ sudo apt install devscripts

Now, instead of using apt-get or aptitude directly to install the build-dependencies, use mk-build-deps.

$ mk-build-deps PKGNAME --install --root-cmd sudo --remove

mk-build-deps will create a new package, called PKGNAME-build-deps, which depends on all the build-dependencies of PKGNAME, and then install it, thereby pulling in all the build-dependencies. As those packages are now installed as dependencies, they are marked as automatically installed. Once you no longer need the build-dependencies, you can remove the package PKGNAME-build-deps, and apt will autoremove all the build-dependencies which are no longer necessary.
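
For example, once the build is done, removing the metapackage and letting apt sweep up the now-unneeded dependencies looks like this:

$ sudo apt remove PKGNAME-build-deps
$ sudo apt autoremove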

Calculate Google Drive Folder Size Using `rclone`

Google Drive lacks a very basic feature: calculating folder size. The web interface offers no way to view the total size of a given directory. There are a couple of dubious-looking online “folder size analyzers” that request access permissions to your Google Drive and offer you this basic functionality. While those apps may have a legitimate need for access permissions to your account, giving those permissions to random apps is a questionable decision.

The (very) useful tool rclone, which provides an rsync-like interface to many cloud storage providers, implements this functionality. After configuring rclone to work with your Google Drive, use the size command to determine the size of a given folder:

$ rclone size "gdrive:Pictures/"
Total objects: 1421
Total size: 9.780 GBytes (10501374440 Bytes)

The size is calculated recursively. However, there is no built-in way to display the size of each sub-directory.
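
As a crude workaround, you can list the top-level sub-directories with rclone lsd and run size on each one; a sketch (it assumes directory names without spaces, since awk splits fields on whitespace):

$ rclone lsd "gdrive:Pictures/" | awk '{print $5}' | \
      while read -r dir; do echo "$dir:"; rclone size "gdrive:Pictures/$dir"; done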

One Line Implementations of GCD and Extended GCD in Python

Sometimes you need a quick and dirty implementation of the greatest common divisor and its extended variant. Actually, because the fractions module comes with a gcd function, the implementation of the extended greatest common divisor algorithm is probably the more useful of the two. Both implementations are recursive, but work nonetheless even for comparatively large integers.

# simply returns the gcd
gcd = lambda a,b: gcd(b, a % b) if b else a

# egcd(a,b) returns (d,x,y) where d == a*x + b*y
egcd = lambda a,b: (lambda d,x,y: (d, y, x - (a // b) * y))(*egcd(b, a % b)) if b else (a, 1, 0)

The code above is Python 2 and 3 compatible.
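
For example, egcd(240, 46) returns (2, -9, 47); indeed, gcd(240, 46) == 2 and 240 * (-9) + 46 * 47 == 2.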