Installing OrderNet Pro on Debian Jessie

OrderNet is a popular stock trading platform in Israel. It comes in two versions: the regular version is based on Silverlight and can be used on Linux using Pipelight, while the Pro version is a desktop program written in .NET Framework and features a better interface. This post walks through the steps needed to get OrderNet Pro running on Debian Jessie using Wine.


Creating a personal apt repository using `dpkg-scanpackages`

From time to time I build and backport deb packages. Most of them are for my personal use, but sharing them would be nice. Another advantage of setting up a personal repository over installing deb files directly is that dependencies can be installed from that repository automatically. This is especially useful if one source package builds multiple binary packages that depend on one another.

The Debian wiki lists several ways to set up such a personal repository. However, I found most of them too cumbersome for my limited requirements. The method described below is probably the simplest and easiest way to get up and running.

The first step is to install dpkg-dev, which provides dpkg-scanpackages:

sudo apt install dpkg-dev

Next, put the deb files you created in some local directory, such as /usr/local/debian, and cd into it.

# dpkg-scanpackages -m . | gzip -c > Packages.gz

This command scans all the *.deb files in the directory and creates an appropriate Packages.gz file. You need to repeat this step whenever you add new packages to /usr/local/debian.
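If you add packages often, you can wrap the scan in a small helper script and re-run it whenever the directory changes. A minimal sketch, assuming the /usr/local/debian path used above:

#!/bin/sh
# Regenerate the Packages.gz index for the local repository.
cd /usr/local/debian || exit 1
dpkg-scanpackages -m . | gzip -c > Packages.gz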

Finally to enable the new local repository, add the following line to /etc/apt/sources.list:

deb [trusted=yes] file:///usr/local/debian ./

The [trusted=yes] option instructs apt to treat the packages as authenticated. Alternatively, if you want to share the repository with others, make sure your webserver serves the directory and point apt to it:

deb https://www.guyrutenberg.com/debian/jessie ./

(You will need the apt-transport-https package in order to use HTTPS repositories.)

Don’t forget to run apt update before trying to install packages from the new repository.

Getting Started with Let’s Encrypt – Tutorial

A few days ago I got my invitation to the Let’s Encrypt Beta Program. For those of you who are not familiar with Let’s Encrypt:

Let’s Encrypt is a new free certificate authority, built on a foundation of cooperation and openness, that lets everyone be up and running with basic server certificates for their domains through a simple one-click process.

This short tutorial is intended to get you up and running with your own Let’s Encrypt signed certificates.

The first thing is to get the Let’s Encrypt client:

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

The main command we will be working with is ./letsencrypt-auto. The first time you run it, it will ask for sudo, install various dependencies using your package manager, and set up a virtualenv environment.

The next step is to issue the certificate and prove to Let’s Encrypt that you have some control over the domain. The client supports two methods to perform the validation. The first one is the standalone server: it works by setting up a webserver on port 443 and responding to a challenge from the Let’s Encrypt servers. However, if you already have your own web-server running on port 443 (the default for TLS/SSL), you will have to shut it down temporarily. To use the standalone method, run:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory certonly

The second method is called webroot authentication. It works by placing files corresponding to the challenge responses in a folder (.well-known/acme-challenge) under the document root of your server:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory -a webroot --webroot-path /var/www/html/ certonly

Whichever method you choose, the client will ask for your email address and the list of domains you want to validate. You can enter multiple domains: the first one will be the Common Name (CN) and the rest will appear in the Subject Alternative Name field.
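If you prefer to skip the interactive prompts, the client can also take the domains and email address on the command line. A sketch of the webroot variant, with placeholder domains and email address:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory \
    -a webroot --webroot-path /var/www/html/ \
    -d example.com -d www.example.com --email webmaster@example.com certonly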


The newly generated certificates will be placed in

/etc/letsencrypt/live/yourdomain.com/

The important files in this directory are fullchain.pem, which contains the full certificate chain to be served to the browser, and privkey.pem, which is the private key.

An example Nginx configuration will now look like:

        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/guyrutenberg.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/guyrutenberg.com/privkey.pem;

Just don’t forget to reload the web-server so configuration changes take effect. No more government snooping on my blog 😉 .
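For example, on a Debian-based system with Nginx, reloading boils down to something like this (the exact service command depends on your init system):

sudo nginx -t && sudo service nginx reload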


Securing Access using TLS/SSL Client Certificates

This tutorial will guide you in setting up authentication using TLS/SSL client certificates. It is a simple one, as it does not delve into the details of integrating with server-side apps. Instead, it gives you instructions on how to set up client certificates as a means to prevent unwanted parties from accessing your website.

One scenario where this comes in useful is limiting access to sensitive applications on the server, like phpMyAdmin. phpMyAdmin handles sensitive data (such as database credentials) and does so over plain HTTP, which may pose several security risks. Even if the data were encrypted, someone with access to the application might find vulnerabilities in it and exploit the relatively high privileges it has to compromise the server. The solution is to also limit who has access to the application at all. A possible approach, which I’ve used, is to limit access to the local machine only and use SSH or a VPN tunnel to reach phpMyAdmin. A better solution is to use TLS/SSL client certificates. They operate on the connection level and provide both encryption and authentication, and they are easier to set up and use than VPN tunnels.

Note that limiting access based on TLS/SSL client certificates can only be done at the sub-domain level, because it happens as part of the connection, before any specific HTTP request is made.

Most of the tutorial is not specific to any HTTP server; however, the server configuration part relates to Nginx. As other servers (such as Lighttpd) use a very similar configuration for client certificates, adapting the instructions should be straightforward.

Creating a CA

The two commands below will create a CA private key and a corresponding self-signed certificate with which you will sign the TLS client certificates.

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -aes-128-cbc -out ca.key
openssl req -new -x509 -days 365 -sha256 -key ca.key -out ca.pem

The first command will ask you for a pass phrase for the key. It is used to protect access to the private key. You can decide to not use one by dropping the -aes-128-cbc option from the command.

The second command will ask you to provide some details to be included in the certificate. Those details will be sent to the browser by the web-server to let it know which client certificate to send back when authenticating.

Server Configuration

Upload the ca.pem that was just generated to your server. You should not upload the private key (ca.key).

The following configuration directives are for Nginx:

ssl_client_certificate /path/to/ca.pem;
ssl_verify_client on; # we require client certificates to access

Assuming you already enabled TLS/SSL for the specific sub-domain, your configuration should look something like this:

server {
        server_name subdomain.example.com;

        # SSL configuration
        #
        listen 443 ssl;
        listen [::]:443 ssl;

        ssl_certificate /etc/nginx/example.pem;
        ssl_certificate_key /etc/nginx/example.key;

        ssl_client_certificate /etc/nginx/ca.pem;
        ssl_verify_client on;

        # ...
}

After reloading the server, check that everything is configured correctly by trying to access your site via HTTPS. It should report “400 Bad Request” and say that “No required SSL certificate was sent”.
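You can also run this check from the command line with curl; without a client certificate the request should be rejected (the domain below is the placeholder used in the configuration above):

curl -ik https://subdomain.example.com/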

Creating a Client Certificate

The following commands will create the private key used for the client certificate (client.key) and a corresponding Certificate Signing Request (client.csr), which the owner of the CA certificate (in the case of this tutorial, you) can sign.

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out client.key
openssl req -new -key client.key -sha256 -out client.csr

You will be asked again to provide some details, this time about you. Those details will be available to the server once your browser sends it the client certificate. You can safely leave the “challenge password” empty[1].

You can add the flag -aes-128-cbc to the first command if you want the private key for the client certificate to be encrypted. If you opt for it, you will be prompted for a pass phrase just like before.

Signing a Client Certificate

The next step is to sign the certificate signing request from the last step. It is good practice to review it and make sure all the details are as expected, so you do not sign anything you did not intend to.

openssl req -in client.csr -text -verify -noout | less

If everything looks just fine, you can sign it with the following command.

openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca.key \
    -set_serial 0x`openssl rand -hex 16` -sha256 -out client.pem

You will be prompted for your pass phrase for ca.key if you chose one in the first step.
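At this point you can already test the setup from the command line, before packaging the certificate for a browser. For example, with curl (using the placeholder domain from the server configuration above), the request should now get past the 400 error:

curl -k --cert client.pem --key client.key https://subdomain.example.com/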

Installing Client Key

Now comes the final part, where we take the signed client certificate, client.pem, and combine it with the private key so it can be installed in our browser.

openssl pkcs12 -export -in client.pem -inkey client.key -name "Sub-domain certificate for some name" -out client.p12

Adjust the -name parameter to your liking; it will be used to identify the certificate in various places, such as the browser. If your private key was encrypted, you will be prompted for its pass phrase. Encryption for certificates in p12 format is mandatory, so you will be prompted to enter a password for the generated file as well. It is OK to reuse the same password here, as those files are practically equivalent. Once the certificate is imported into your browser, you will not need the password for normal usage, unless you want to import it into another browser.

GlobalSign provides instructions on how to actually install the p12 client certificate in browsers on Linux and Windows.

References


  1. It is used to “enhance” the security of a certificate revocation request by requiring not only knowledge of the private key, but also the challenge password. Thus, someone who gets hold of your private key cannot revoke the certificate on their own. However, this is also the reason the option is not used more often: when someone steals your private key, they usually prefer that the certificate not be revoked. 

Setting Up Lighttpd with PHP-FPM

PHP-FPM is an alternative to spawn-fcgi when setting up Lighttpd with PHP. It has several advantages over spawn-fcgi, among them:

  • It can dynamically scale and spawn new processes as needed.
  • It gracefully respawns PHP processes after configuration changes.
  • It comes with an init.d script, so there is no need to write your own.
  • It can log slow PHP script execution (similar to MySQL’s slow query log).

Installation

PHP-FPM is available from the Ubuntu (since 12.04) and Debian repositories, so all you need to do is:

$ sudo apt-get install php5-fpm

Configuration

PHP-FPM works with process pools. Each pool spawns processes independently and has its own configuration. This can be useful for separating the PHP processes of each user or major site on the server. PHP-FPM comes with a default pool configuration in /etc/php5/fpm/pool.d/www.conf. To create new pools, simply copy the default pool configuration and edit it. At a minimum, you will need to set the following (a sample pool file is shown after the list):

  • Pool name – the [www] header. I name mine after the user the pool serves.
  • user – I set this to the appropriate user, and leave group as www-data.
  • listen = /var/run/php.$pool.sock – Unix sockets have lower overhead than TCP sockets, so if both Lighttpd and PHP run on the same server they are preferable. $pool will be expanded to your pool name. Also, it is more secure to create the sockets in a directory that is not globally writable (unlike /tmp/), so /var/run is a good choice.
  • listen.owner should match the PHP user, while listen.group should match the group Lighttpd runs as, so both have access to the socket.
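Putting these settings together, a pool file might look like the following sketch (the user name alice and the process-manager values are illustrative; adjust them to your setup):

[alice]
user = alice
group = www-data
listen = /var/run/php.$pool.sock
listen.owner = alice
listen.group = www-data
; process manager settings, copied from the default pool
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3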

If you copied www.conf to create your new configuration, you will need to rename the original to something like www.conf.default in order to disable it.
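Assuming the default Debian paths, disabling the original pool is just a rename followed by a reload:

sudo mv /etc/php5/fpm/pool.d/www.conf /etc/php5/fpm/pool.d/www.conf.default
sudo service php5-fpm reload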

In the Lighttpd configuration you need to add the following to each vhost that uses PHP:

fastcgi.server    = ( ".php" => 
        ((
                "disable-time" => 0,
                "socket" => "/var/run/php.pool.sock",
        ))
)

Here pool in the socket path is replaced by the matching pool name from the PHP-FPM configuration. Overriding disable-time and setting it to 0 is suitable when you have only one PHP backend and it is local. In this scenario, attempting to connect to the backend is cheap, and if it were disabled no requests would get through anyway.

Useful Files

  • /etc/php5/fpm/pool.d – The PHP-FPM pool configuration directory.
  • /var/log/php5-fpm.log – The PHP-FPM error log. It will display errors and warnings notifying you when pm.max_children has been reached, when processes die unexpectedly, etc.

Creating a Hebrew Document in LyX 2.1 with XeTeX

This post complements the basic LaTeX template I gave yesterday for typesetting Hebrew with XeTeX. I’ll walk through the (short) list of steps needed to configure LyX with XeTeX.

Prerequisites

  • LyX 2.1 or later (I’ve also tested with the development version of 2.2). I had very limited success with LyX 2.0, so you should probably avoid it.
  • XeTeX – I’ve tested with version 3.1415926-2.4-0.9998 which comes with TeXLive 2012, but I guess any recent version will do.
  • The polyglossia and bidi packages. Again I’ve used those which come with TeXLive 2012.
  • Good TrueType Hebrew fonts. I recommend Culmus 0.121 or newer. You may also try and use the fonts that come with your operating system, they might work as well.

Setting up the document

Create a new document and open the settings dialog (Document -> Settings...).

  1. Pick a suitable Document class. I recommend “KOMA-Script Article” but “Article” works just as fine. Avoid “Hebrew Article”, as it is broken under XeTeX.
  2. Under Fonts, check the box next to “Use non-TeX fonts (via XeTeX/LuaTeX)” and select suitable fonts:
    • Roman: Frank Ruehl CLM. David CLM is also a good choice, with a somewhat better italic variant.
    • Sans Serif: Simple CLM.
    • Typewriter: Miriam Mono CLM.
    • There is no need to change the Math font.
  3. Under Language select Hebrew as the document’s language.

That’s basically it. You can now write your document and compile it. I would suggest saving these settings as default (via “Save as Document Defaults”) or saving it as a template so you won’t need to repeat those steps.

Writing in English

To insert English text in your Hebrew document, you need to change the current language. The easiest way to do so is to create a keyboard shortcut for it:

  1. Go to Tools -> Preferences -> Editing -> Shortcuts
  2. Write “language” under “Show key-bindings containing:”.
  3. Select “language” under “Cursor, Mouse and Editing Functions” and click “Modify” to set a keyboard shortcut (F12 is traditionally used for this).

Now you can toggle the current language between English and Hebrew by simply pressing F12.

Remark about Fonts

It is preferable to use fonts that provide both Hebrew and Latin scripts, as otherwise there might be significant style differences which make the document look weird. It is possible to set a different font for Hebrew and Latin, but care needs to be taken to match styles. To do so, add the following lines to the Preamble:

\newfontfamily\hebrewfont[Script=Hebrew]{David CLM}
\newfontfamily\hebrewfonttt[Script=Hebrew]{Miriam Mono CLM}
\newfontfamily\hebrewfontsf[Script=Hebrew]{Simple CLM}

gettext with Autotools Tutorial

In this tutorial we walk through the steps needed in order to add localizations to an existing project that uses GNU Autotools as build system.

We start by taking a slightly modified version of the Hello World example that comes with the Automake sources. You can keep track of the changes to the source throughout this tutorial by following the commits to amhello-gettext on GitHub. We start with the following files:

$ ls -RF
.:
configure.ac  Makefile.am  README  src/

./src:
main.c  Makefile.am

Running gettextize

The first step is copying some necessary gettext infrastructure into your project. This is done by running gettextize in the root directory of your project. The command will create a bunch of new files and modify some existing ones. Most of these files are auto-generated, so there is no need to add them to your version control; you should only add the files you create or modify manually.
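For reference, the invocation itself is simply (run from the project root; recent versions print some advice and may ask you to press Return a few times):

$ gettextize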

You will need to add the following lines to your configure.ac:

AM_GNU_GETTEXT([external])
AM_GNU_GETTEXT_VERSION(0.18)

The version specified is the minimum version of gettext required to build your package.

Copy po/Makevars.template to po/Makevars and modify it as needed.
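In other words, from the project root:

$ cp po/Makevars.template po/Makevars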

The next step is to copy over gettext.h to your sources.

$ cp /usr/share/gettext/gettext.h src/

The header that provides the various translation functions is libintl.h. gettext.h is a convenience wrapper around it which allows disabling gettext when --disable-nls is passed to the ./configure script. It is recommended to use gettext.h instead of libintl.h.

Triggering gettext in main()

In order for gettext to work, you need to trigger it in your main(). This is done by adding the following lines to the main() function:

setlocale (LC_ALL, "");
bindtextdomain (PACKAGE, LOCALEDIR);
textdomain (PACKAGE);

You should also add #include "gettext.h" to the list of includes.

PACKAGE should be the name of your program, and is usually defined in the config.h file generated by either autoconf or autoheader. To define LOCALEDIR we need to add the following line to src/Makefile.am:

AM_CPPFLAGS = -DLOCALEDIR='"$(localedir)"'

If AM_CPPFLAGS is already defined, just append to it the -DLOCALEDIR='"$(localedir)"' part.

Marking strings for translation

At this point, your program should compile with gettext, but since we have not translated anything yet it will not do anything useful. Before translating, we need to mark the translatable strings in the sources. Wrap each translatable string in _(...), and add the following lines to each file that contains translatable strings:

#include "gettext.h"
#define _(String) gettext (String)

Extracting strings for translation

Before extracting the strings, we need to tell gettext where to look. This is done by listing each source file with translatable strings in po/POTFILES.in. So in our example po/POTFILES.in should look like:

# List of source files which contain translatable strings.
src/main.c

Afterwards, the following command can be used to actually extract the strings into po/amhello.pot (which should go into version control):

make -C po/ update-po

If you haven’t run ./configure yet, you need to run autoreconf --install && ./configure before running the above make command.

Translating strings

To begin translating, you need to create a *.po file for your language. This is done using msginit:

cd po/ && msginit --locale he_IL.utf8

The locale should be specified as a two-letter language code followed by a two-letter country code. In my example I’ve used Hebrew, so it will create a po/he.po file. To translate the program, edit the .po file using either a text editor or a dedicated program (see the list of editors here).

After you have updated the .po file for your language, list the language in po/LINGUAS (you need to create this file). For example, in my case:

# Set of available languages
he

Now you should be ready to compile and test the translation. Unfortunately, gettext requires installing the program in order to properly load the message catalogs, so we need to call make install.

./configure --prefix /tmp/amhello
make
make install

Now, to check the translation, simply run /tmp/amhello/bin/hello (you might need to set LC_ALL or LANGUAGE depending on your locale to see the translation).

$ LANGUAGE=he /tmp/amhello/bin/hello 
שלום עולם!

A final note about bootstrapping: when people check out your code from version control, many auto-generated files will be missing. The simplest way to bootstrap the code into a state where you can simply call ./configure && make is by using autoreconf:

autoreconf --install

This will add any missing files and run all the autotools friends (aclocal, autoconf, automake, autoheader, etc.) in the right order. Additionally, it will call autopoint, which copies the necessary gettext files that were generated when you called gettextize earlier in the tutorial. If your project uses an ./autogen.sh script that calls the autotools utilities manually, you should add a call to autopoint --force before the call to aclocal, as in the sketch below.
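A minimal autogen.sh along those lines might look like this (the exact set of tools your project needs may differ):

#!/bin/sh
# Bootstrap the build system; autopoint must run before aclocal.
autopoint --force
aclocal
autoconf
autoheader
automake --add-missing --copy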

Finally, those are the files that end up version controlled in our example:

$ ls -RF
.:
configure.ac  Makefile.am  po/  README  src/

./po:
amhello.pot  he.po  LINGUAS  Makevars  POTFILES.in

./src:
gettext.h  main.c  Makefile.am


Displaying Google Adsense in MediaWiki

This post shows how to insert code that displays ads in MediaWiki. The proposed methods use hooks instead of modifying the skin. This has two advantages:

  1. No need to modify each skin separately. This allows users to change skins and ads will be presented to them in the same logical place.
  2. It makes upgrades simpler. Hooks reside in LocalSettings.php which isn’t modified by MediaWiki version upgrades, unlike skins.

The examples below show how to insert ads into the header, footer and sidebar of each page. I’ve used the Google Adsense ad-serving code, but it could be easily replaced by the ad-serving code of any other ad network.

Restricting SSH Access to rsync

Passphrase-less SSH keys allow one to automate remote tasks, since no user intervention is needed to enter a passphrase and decrypt the key. While this is convenient, it poses a security risk: the plain key can be used by anyone who gets hold of it to access the remote server. To this end, the developers of SSH made it possible to restrict, via .ssh/authorized_keys, the commands that can be executed with specific keys. This works great for simple commands, but because rsync executes remote commands with different arguments on the remote end, depending on the invocation on the local machine, it gets quite complicated to restrict it properly via .ssh/authorized_keys.

Luckily, the developers of rsync foresaw this problem and wrote a script called rrsync (for restricted rsync) specifically to make it easy to restrict keys, via .ssh/authorized_keys, to be used only for rsync. If you have rsync installed, rrsync should have been distributed alongside it. On Debian/Ubuntu machines it can be found under /usr/share/doc/rsync/scripts/rrsync.gz. If you can’t find it there, you can download the script directly from here. On the remote machine, copy the script, unpacking it if needed, and make it executable:

user@remote:~$ gunzip /usr/share/doc/rsync/scripts/rrsync.gz -c > ~/bin/rrsync
user@remote:~$ chmod +x ~/bin/rrsync

On the local machine, create a new SSH key and leave the passphrase empty (this will allow you to automate the rsync via cron). Copy the public key to the remote server.

user@local:~$ ssh-keygen -f ~/.ssh/id_remote_backup -C "Automated remote backup"
user@local:~$ scp ~/.ssh/id_remote_backup.pub user@remote:~/

Once the public key is on the remote server, edit ~/.ssh/authorized_keys and append the public key.

user@remote:~$ vim ~/.ssh/authorized_keys

(Vim tip: use :r! cat id_remote_backup.pub to insert the contents of id_remote_backup.pub directly into a new line.) Now prepend the following to the newly added line:

command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding

The command="..." restricts access of that public key by executing the given command and disallowing others. All the other no-* stuff further restrict what can be done with that particular public key. As the SSH daemon will not start the default shell when accessing the server using this public key, the $PATH environment variable will be pretty empty (similar to cron), hence you should specify the full path to the rrsync script. The two arguments to rrsync are -ro which restricts modifying the directory (drop it if you want to upload stuff to the remote directory) and the path to the directory you want to enable remote access to (in my example ~/backups/).

The result should look something like:

command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding ssh-rsa AAA...vp Automated remote backup

After saving the file, you should be able to rsync files from the remote server to the local machine without being prompted for a password.

user@local:~$ rsync -e "ssh -i $HOME/.ssh/id_remote_backup" -av user@remote: etc2/

Two things should be noted:

  1. You need to specify the passphrase-less key in the rsync command (the -e "ssh -i $HOME/.ssh/id_remote_backup" part).
  2. The remote directory is always relative to the directory given to rrsync in the ~/.ssh/authorized_keys file.
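Since the key has no passphrase, the backup can now be automated with cron. A sketch of a crontab entry on the local machine that pulls the remote backups every night at 03:00 (the local destination directory is an assumption):

0 3 * * * rsync -e "ssh -i $HOME/.ssh/id_remote_backup" -a user@remote: $HOME/backups-mirror/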

Creating Self-Signed ECDSA SSL Certificate using OpenSSL

Before generating a private key, you’ll need to decide which elliptic curve to use. To list the supported curves run:

openssl ecparam -list_curves

The list is quite long, and unless you know what you’re doing you’ll be better off choosing one of the sect* or secp* curves. For this tutorial I chose secp521r1 (a curve over a 521-bit prime field).

Generating the certificate is done in two steps: First we create the private key, and then we create the self-signed X509 certificate:

openssl ecparam -name secp521r1 -genkey -param_enc explicit -out private-key.pem
openssl req -new -x509 -key private-key.pem -out server.pem -days 730

The newly created server.pem and private-key.pem are the certificate and the private key, respectively. The -param_enc explicit tells openssl to embed the full parameters of the curve in the key, as opposed to just its name. This allows clients that are not aware of the specific curve name to work with it, at the cost of slightly increasing the size of the key (and the certificate).

You can examine the key and the certificate using

openssl ecparam -in private-key.pem -text -noout
openssl x509 -in server.pem -text -noout

Most webservers expect the private key to be bundled with the certificate in the same file, so run:

cat private-key.pem server.pem > server-private.pem

And install server-private.pem as your certificate. If you don’t concatenate the private key to the certificate, at least Lighttpd will complain with the following error:

SSL: Private key does not match the certificate public key, reason: error:0906D06C:PEM routines:PEM_read_bio:no start line
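For reference, installing the combined file in Lighttpd amounts to pointing ssl.pemfile at it, for example (the path is illustrative):

ssl.engine  = "enable"
ssl.pemfile = "/etc/lighttpd/ssl/server-private.pem"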