Kindle Paperwhite “Unable to Open Item”

Recently, I tried transferring some new ebooks to my Kindle Paperwhite (first generation). The books were listed properly, but when I tried to open them, I got the “Unable to Open Item” error, which suggested re-downloading the books from Amazon. I transferred the files again and again, but it didn’t help. Some of the books were MOBI files, while others were AZW files I got from Indiebook (אינדיבוק), and all of them opened fine on my computer.

Finally, I followed advice from a comment on the KindledFans blog and converted the files to AZW3 (the original comment suggested mobi, but AZW3 works better with Hebrew). After converting, I moved the files to my Kindle, and they opened just fine.

Enabling Compose Key in GNOME 3.4

For some reason, I couldn’t easily find how to enable the Compose key in GNOME 3.4. None of the references I found matched the actual menus and dialogs that I saw on my system, including the official GNOME help pages. So I decided to document it here for my future reference.

  1. Go to System Settings->Keyboard Layout.
  2. Select the Layouts tab and click Options.
  3. Under Compose key position, select the key you want to use as the Compose key.

Wikipedia has a nice table summarizing the Compose key sequences.

gettext with Autotools Tutorial

In this tutorial, we walk through the steps needed to add localizations to an existing project that uses GNU Autotools as its build system.

We start by taking a slightly modified version of the Hello World example that comes with the Automake sources. You can keep track of the changes to the source throughout this tutorial by following the commits to amhello-gettext on GitHub. We start with the following files:

$ ls -RF
.:
configure.ac  Makefile.am  README  src/

./src:
main.c  Makefile.am

Running gettextize

The first step is copying some necessary gettext infrastructure into your project. This is done by running gettextize in the root directory of your project. The command will create a bunch of new files and modify some existing files. Most of these files are auto-generated, so there is no need to add them to your version control. You should only add those files you create or modify manually.

You will need to add the following lines to your configure.ac:

AM_GNU_GETTEXT([external])
AM_GNU_GETTEXT_VERSION(0.18)

The version specified is the minimum version of gettext required to build your package.
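With these lines added, a complete configure.ac for the example could look like the sketch below (package name, version, and bug-report address are illustrative). Note that po/Makefile.in has to be listed in AC_CONFIG_FILES so the po/ directory gets built:

```m4
AC_INIT([amhello], [1.0], [bug-report@example.org])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_HEADERS([config.h])

AM_GNU_GETTEXT([external])
AM_GNU_GETTEXT_VERSION(0.18)

AC_CONFIG_FILES([Makefile src/Makefile po/Makefile.in])
AC_OUTPUT
```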

Copy po/Makevars.template to po/Makevars and modify it as needed.

The next step is to copy over gettext.h to your sources.

$ cp /usr/share/gettext/gettext.h src/

libintl.h is the header that provides the various translation functions. gettext.h is a convenience wrapper around it that allows disabling gettext when --disable-nls is passed to the ./configure script. It is recommended to use gettext.h instead of including libintl.h directly.

Triggering gettext in main()

In order for gettext to work, you need to trigger it by adding the following lines to your main() function:

setlocale (LC_ALL, "");
bindtextdomain (PACKAGE, LOCALEDIR);
textdomain (PACKAGE);

You should also add #include "gettext.h" to the list of includes.

PACKAGE should be the name of your program; it is usually defined in config.h, which ./configure generates from the template produced by autoheader. To define LOCALEDIR, we need to add the following line to src/Makefile.am:

AM_CPPFLAGS = -DLOCALEDIR='"$(localedir)"'

If AM_CPPFLAGS is already defined, just append the -DLOCALEDIR='"$(localedir)"' part to it.

Marking strings for translation

At this point, your program should compile with gettext. But since we did not translate anything yet, it will not do anything useful. Before translating, we need to mark the translatable strings in the sources. Wrap each translatable string in _(...), and add the following lines to each file that contains translatable strings:

#include "gettext.h"
#define _(String) gettext (String)

Extracting strings for translation

Before extracting the strings, we need to tell gettext where to look. This is done by listing each source file with translatable strings in po/POTFILES.in. So in our example, po/POTFILES.in should look like:

# List of source files which contain translatable strings.
src/main.c

Afterward, the following command can be used to actually extract the strings to po/amhello.pot (which should go in version control):

make -C po/ update-po

If you haven’t run ./configure yet, you need to run autoreconf --install && ./configure before running the above make command.

Translating strings

To begin translating, you need a *.po file for your language. This is done using msginit:

cd po/ && msginit --locale he_IL.utf8

The locale is specified as a two-letter language code, followed by a two-letter country code and an optional encoding. In my example I’ve used Hebrew, so msginit creates a po/he.po file. To translate the program, edit the .po file with either a text editor or a dedicated PO editor.
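A translated entry in he.po then looks like this (the msgid comes from the sources; the Hebrew msgstr shown here matches the output at the end of this tutorial):

```po
msgid "Hello, world!"
msgstr "שלום עולם!"
```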

After you update the .po file for your language, list the language in po/LINGUAS (you need to create it). For example, in my case:

# Set of available languages
he

Now you should be ready to compile and test the translation. Unfortunately, gettext requires installing the program in order to properly load the message catalogs, so we need to call make install.

./configure --prefix /tmp/amhello
make
make install

Now, to check the translation, simply run /tmp/amhello/bin/hello (you might need to change LC_ALL or LANGUAGE, depending on your locale, to see the translation).

$ LANGUAGE=he /tmp/amhello/bin/hello 
שלום עולם!

Final note about bootstrapping: when people check out your code from version control, many autogenerated files will be missing. The simplest way to bootstrap the code into a state where you can simply call ./configure && make is by using autoreconf:

autoreconf --install

This will add any missing files and run all the Autotools utilities (aclocal, autoconf, automake, autoheader, etc.) in the right order. Additionally, it will call autopoint, which copies the necessary gettext files that were generated when you called gettextize earlier in the tutorial. If your project is using a ./autogen.sh script that calls the Autotools utilities manually, you should add a call to autopoint --force before the call to aclocal.
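For such a setup, a minimal autogen.sh might look like the following sketch (the exact set of tools depends on your project; autoreconf --install remains the simpler alternative):

```shell
#!/bin/sh
set -e
# autopoint must run before aclocal so the gettext m4 macros are in place
autopoint --force
aclocal
autoconf
autoheader
automake --add-missing --copy
```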

Finally, these are the files that end up under version control in our example:

$ ls -RF
.:
configure.ac  Makefile.am  po/  README  src/

./po:
amhello.pot  he.po  LINGUAS  Makevars  POTFILES.in

./src:
gettext.h  main.c  Makefile.am

Displaying Google AdSense in MediaWiki

This post shows how to insert code that displays ads in MediaWiki. The proposed methods use hooks instead of modifying the skin. This has two advantages:

  1. No need to modify each skin separately. This allows users to change skins, and ads will be presented to them in the same logical place.
  2. It makes upgrades simpler. Hooks reside in LocalSettings.php, which isn’t modified by MediaWiki version upgrades, unlike skins.

The examples below show how to insert ads into the header, footer, and sidebar of each page. I’ve used the Google AdSense ad-serving code, but it could be easily replaced by the ad-serving code of any other ad network.

View Failed Login Attempts – lastb

The lastb command can be used to list failed login attempts. By default, it displays a nice table of all failed attempts, including the username, time, and host from which the attempt originated. The following pipeline lists the unique usernames that were tried:

sudo lastb -w | cut -d " " -f 1 | sort | uniq | less

The -w tells lastb to display the full username. The cut, sort, and uniq commands turn the output of lastb into a sorted list that contains each username only once.

When I ran it recently on my server, I found some interesting results. Nobody tried to log in as root in the last fortnight, but they did try r00t, root2, root3, roottest, rootuser, and a bunch of similar variants. There were generic users such as admin, support, test, user, and sales, and a surprising number of software-related ones: wordpress, wp, stunnel, mysql, moodle, mongodb, minecraft, etc.

Another useful command is

$ sudo lastb -f /var/log/btmp.1 -w -i | awk '{print $3}' | sort | uniq --count | sort -nr | less

which lists hosts sorted by the number of failed attempts originating from each host.

Overall, in the last two weeks my server experienced more than 3300 failed login attempts using more than 800 unique usernames. Fortunately, as my server only allows public-key authentication via ssh, all those attempts are pretty futile.

Introducing mdview – a lightweight Markdown viewer

My favorite editor is vim, but it has its downsides. Vim lacks the GUI needed to properly preview formats like Markdown. Sure, vim can highlight Markdown syntax, but that is no replacement for a real preview. With that itch in mind, I searched for a solution but found none that satisfied me. For reStructuredText I had found a solution that worked well: it started a local web server and did the previewing in the browser. Inspired by it, I started writing mdview.

mdview lets you instantly preview any Markdown file you’re editing in your favorite browser. It automatically refreshes when the file changes, which makes it great for working with the editor and browser side by side for a live preview.

Outbrained – Greasemonkey script to remove tracking Outbrain links

Outbrain is a service that provides related content links to publishers. It is used by some news sites I frequent, and recently I’ve been annoyed by its tracking behavior. When you hover your cursor over the link, it looks like a regular, benign link, but once you click it, it changes to an evil tracking URL. To add to the annoyance, it is not always easy to distinguish Outbrain “ads” from legitimate links at first sight.

To end this annoyance for me, I’ve written a little Greasemonkey script. It is currently set up to work for Haaretz, Ynet, Calcalist, and TheMarker, but it should work fine for any site using Outbrain if enabled.

Download: outbrained.user.js

wxWidgets 2.8 to 3.0 Migration: Converting wxString to Numbers

wxWidgets provides a set of utility methods for converting a wxString to various integer types, such as ToLong(). While the documentation for these functions remained roughly the same between wxWidgets 2.8 and 3.0, the implementation changed. In wxWidgets 2.8, if the string was empty, any of the number conversion functions would store the value 0. In wxWidgets 3.0 the behavior is different, as can be learned from the following comment in wxstring.cpp:

// notice that we return false without modifying the output parameter at all if
// nothing could be parsed but we do modify it and return false then if we did
// parse something successfully but not the entire string

This means that if you relied on ToLong() storing 0 in the output long when given an empty string, in wxWidgets 3.0 you will find an uninitialized value there instead.

I also noticed, when comparing the wxString code in 2.8 and 3.0, that 3.0 implements the integer conversion functions using C macros, while 2.8 used templates. I wonder why this was changed, as it looks like a regression to me.

C++: mt19937 Example

C++11 introduces several pseudo-random number generators designed to replace the good old rand from the C standard library. I’ll show basic usage examples of std::mt19937, which provides random number generation based on the Mersenne Twister algorithm. Using the Mersenne Twister implementation that comes with C++11 has advantages over rand(), including:

  1. mt19937 has a much longer period than rand, i.e. it will take much longer for its random sequence to repeat itself.
  2. It has much better statistical behavior.
  3. Several random number generator engines can be initialized simultaneously with different seeds, in contrast to the single “global” seed that srand() provides.

The downside is that mt19937 is a bit less straightforward to use. However, I hope this post will help with that point :-).

Make Offline Mirror of a Site Using `wget`

Sometimes you want to create an offline copy of a site that you can take with you and view even without internet access. Using wget, you can make such a copy easily:

wget --mirror --convert-links --adjust-extension --page-requisites \
     --no-parent http://example.org

Explanation of the various flags:

  • --mirror – Makes (among other things) the download recursive.
  • --convert-links – Converts all the links (also to things like CSS stylesheets) to relative links, so it will be suitable for offline viewing.
  • --adjust-extension – Adds a suitable extension (.html or .css) to each filename, depending on its content type.
  • --page-requisites – Downloads things like CSS stylesheets and images required to properly display the page offline.
  • --no-parent – When recursing, do not ascend to the parent directory. It is useful for restricting the download to only a portion of the site.

Alternatively, the command above may be shortened:

wget -mkEpnp http://example.org

Note: The last p is part of np (--no-parent), and hence you see p twice in the flags.