Vim: Creating .clang_complete using CMake

The clang_complete plugin for Vim offers superior code completion. For anything but a trivial project, though, it will only do so if you provide a .clang_complete file with the right compilation arguments. The easy way to create one is with the cc_args.py script that ships with the plugin, which records the compiler options directly into the .clang_complete file. Usually, one runs:

make CXX='~/.vim/bin/cc_args.py clang++'

However, makefiles generated by CMake ignore the CXX variable at build time: the compiler is fixed when the project is configured.

The solution is to call CMake with the CXX environment variable set:

CXX="$HOME/.vim/bin/cc_args.py clang++" cmake ..
make

Note that this will create the .clang_complete file in the build directory (I’ve assumed an out-of-source build), so copy the file over to Vim’s working directory so the plugin can find it. You’ll also need to re-run CMake (without CXX) so that subsequent builds don’t re-create the .clang_complete file each time.
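Putting it together, the whole workflow looks roughly like this. This is a sketch assuming an out-of-source build in a build/ subdirectory; adjust paths to your layout:

```shell
# Configure with the wrapper so every compiler invocation is recorded
mkdir -p build && cd build
CXX="$HOME/.vim/bin/cc_args.py clang++" cmake ..
make                      # .clang_complete is written to the build directory

# Copy the result to where Vim will look for it
cp .clang_complete ..

# Reconfigure from a fresh cache so later builds use the plain compiler
rm CMakeCache.txt
CXX=clang++ cmake ..
```

Deleting CMakeCache.txt forces CMake to re-detect the compiler; otherwise the cached wrapper would keep being used on reconfiguration.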

Before finding this solution, I first tried setting the CMAKE_CXX_COMPILER variable in CMake. However, CMake complained that the compiler wasn’t found: it doesn’t accept extra command-line arguments as part of the compiler setting.

The more I use clang_complete, the more awesome I find it. It has its quirks, but nonetheless it’s much simpler and better than manually creating tag files for each library.

Updated 6/1/2014: When setting CXX, use $HOME instead of ~ (to fix issues with newer versions of CMake).

Using std::chrono::high_resolution_clock Example

Five years ago I showed how to use clock_gettime to do basic high-resolution profiling. The approach there is very useful but, unfortunately, not cross-platform: it works only on POSIX-compliant systems, which notably excludes Windows.

Luckily, the not-so-new C++11 standard provides, among other things, a portable interface to high-precision clocks. It’s still not a perfect solution, as it only provides wall time (clock_gettime can also give actual per-process and per-thread CPU time). However, it’s still nice.

#include <iostream>
#include <chrono>
using namespace std;
 
int main()
{
    cout << chrono::high_resolution_clock::period::den << endl;
    auto start_time = chrono::high_resolution_clock::now();
    volatile long long temp = 0;  // volatile so the loop isn't optimized away
    for (int i = 0; i < 242000000; i++)
        temp += i;
    auto end_time = chrono::high_resolution_clock::now();
    cout << chrono::duration_cast<chrono::seconds>(end_time - start_time).count() << ":";
    cout << chrono::duration_cast<chrono::microseconds>(end_time - start_time).count() << endl;
    return 0;
}

I’ll explain the code a bit. chrono is the new header file that provides various time- and clock-related functionality in the new standard library. high_resolution_clock should be, according to the standard, the clock with the highest precision.

cout << chrono::high_resolution_clock::period::den << endl;

Note that the standard doesn’t guarantee how many ticks per second the clock has, only that it’s the highest available. Hence, the first thing we do is print the clock’s resolution: period::den is the denominator of the tick period in seconds (so, when period::num is 1, it’s the number of ticks per second). My system reports 1000000 ticks per second, which is microsecond precision.

Getting the current time using now() is self-explanatory. The possibly tricky part is

cout << chrono::duration_cast<chrono::seconds>(end_time - start_time).count() << ":";

(end_time - start_time) is a duration (a newly defined type), and the count() method returns the number of ticks it represents. As we said, the number of ticks per second may differ from system to system, so to get the number of seconds we use duration_cast. The same goes for microseconds in the next line.

The standard also provides other useful time units, such as nanoseconds, milliseconds, minutes, and even hours.

Installing Citrix Receiver on Ubuntu 64-bit

It’s a hassle.

The first step is to grab the 64-bit deb package from the Citrix website. Next, install it using dpkg:

~$ sudo dpkg --install Downloads/icaclient_12.1.0_amd64.deb

This results in the following error:

dpkg: error processing icaclient (--install):
 subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
 icaclient

This can be fixed by changing line 2648 in /var/lib/dpkg/info/icaclient.postinst:

         echo $Arch|grep "i[0-9]86" >/dev/null

to:

         echo $Arch|grep -E "i[0-9]86|x86_64" >/dev/null

And then execute:

~$ sudo dpkg --configure icaclient
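If you prefer, the same edit can be scripted with sed instead of opening the file in an editor. This is a sketch; the exact pattern (and the line it matches) may vary between icaclient package versions, so check the file afterwards:

```shell
# Patch the architecture check in place, then re-run configuration
sudo sed -i 's/grep "i\[0-9\]86"/grep -E "i[0-9]86|x86_64"/' /var/lib/dpkg/info/icaclient.postinst
sudo dpkg --configure icaclient
```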

Credit for this part goes to Alan Burton-Woods.

Next, when trying to actually use Citrix Receiver to launch any apps, I encountered the following error:

Contact your help desk with the following information:
You have not chosen to trust "AddTrust External CA Root", the
issuer of the server's security certificate (SSL error 61)

In my case, the missing root certificate was Comodo’s AddTrust External CA Root. Depending on the certificate used by the server you’re trying to connect to, you may be missing some other root certificate. Now, you can either download the certificate from Comodo, or use the one in /usr/share/ca-certificates/mozilla/AddTrust_External_Root.crt (they are the same). Either way, you should copy the certificate to the icaclient certificate directory:

$ sudo cp /usr/share/ca-certificates/mozilla/AddTrust_External_Root.crt /opt/Citrix/ICAClient/keystore/cacerts/

These steps got Citrix working for me, but your mileage may vary.

nameref Doesn’t Work Properly with Theorem Environments

I came across some unexpected behavior in nameref, the package responsible for creating named references, when used in conjunction with theorem environments such as the one provided by amsthm. For example, take a look at the following LaTeX document.

\documentclass{article}
\usepackage{amsmath,hyperref}

\begin{document}
\section{My Section}
\newtheorem{theorem}{Theorem}
\begin{theorem}[My Theorem]
\label{theo:My}$0=0$
\end{theorem}
This is a named reference: \nameref{theo:My}
\end{document}

You would expect the named reference to refer to the theorem’s name. However, in reality, it refers to the section’s name.


The New SourceForge

I’ve recently started upgrading my projects to SourceForge’s new “forge” software, Allura. Upgrading existing projects has been possible for quite some time (IIRC since July), but I hadn’t found the time to deal with it until now. From my short experience with the “new” SourceForge, I came away with two short insights.

Separate Numbering for Problems in LaTeX

By default, when using amsthm to create environments such as theorems, claims, and problems, they all use the same numbering. Sometimes this is annoying, as the numbering for the problems should generally be unaffected by the theorems present (or the lack of them). For example, the default behavior produces:

Problem 1
Problem 2
Theorem 3
Problem 4

where the desired behavior would be (in my opinion):

Problem 1
Problem 2
Theorem 1
Problem 3

Fortunately, this can be done by redefining the problem environment.

\makeatletter
\let\problem\@undefined % undefine the existing problem environment
\makeatother
\theoremstyle{definition} % set the style of the new environment to 'definition'
\newtheorem{problem}{\protect\problemname} % (re)define the 'problem' environment

The \theoremstyle can be one of three defaults: plain, definition, and remark, or some custom style defined using \newtheoremstyle.

See amsthm’s documentation for more information, such as subordinate numbering (numbering per section).
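For example, the subordinate numbering mentioned above is obtained by passing a parent counter as the optional argument to \newtheorem. A sketch based on amsthm’s documented interface:

```latex
\theoremstyle{definition}
% Problems reset with each section and are numbered 1.1, 1.2, 2.1, ...
\newtheorem{problem}{Problem}[section]
```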

Annoying Outlook Error

Sadly, there are occasions when I can’t use my beloved Gmail account and have to use Outlook to connect to a corporate Exchange server. Due to Exchange’s inability to efficiently operate with large mailboxes (at least that’s what the tech support there tells me), I have to resort to moving messages to a local PST. However, some time ago I started encountering the following error whenever I tried moving messages into a PST file:

Cannot move the items. The item cannot be moved. It was either already moved or deleted, or access was denied.

I’ve tried changing permissions, moving my PST around, repairing it with some tools that come bundled with Office (I read somewhere that such an error can be caused by corrupted PST files), and even tried creating a new PST. But, alas, the not-so-helpful message just wouldn’t go away.

Scanning Lecture Notes – Separating Colors

Continuing my journey to perfect my scanned lecture notes, I’ll be reviewing my efforts to find a good way to threshold scanned notes to black and white. I’ve spent several days experimenting with this stuff, and I think I’ve managed to improve on the basic methods used.

In the process of experimenting, I’ve come up with what I think are the 3 main hurdles in scanning notes (or text in general) to black and white.

  1. Bleeding. When using both sides of the paper, the ink might “bleed” through to the other side. Even if the ink doesn’t actually pass through, it might still be visible as a kind of shadow when scanning, just like when you hold a piece of paper in front of a light and are able to make out the text on the other side.
  2. Non-black ink. Photocopying blue ink is notoriously messy. Scanning it to b&w also imposes challenges.
  3. Skipping. This is an artifact sometimes introduced when writing with a ballpoint pen. It’s the result of inconsistent ink flow, and is rarer with wetter inks such as rollerballs or fountain pens.

Those issues can be visualized in the first three images. These images are the originals I’ve tested the various methods with. The other images are results of the various methods explained in this post and should convey the difference between them.

Scanning Lecture Notes – Compression

A new semester is about to begin, so I again set out to organize lecture notes and scan them. This time I intend to invest more time in investigating and perfecting this process. Hopefully, I’ll present my conclusions in a few posts, each focusing on a different aspect.

In the first post, I’ll discuss the various ways to compress scanned lecture notes. Because lecture notes (at least mine) aren’t especially colorful, as I only use one pen at a time, I want the result to be black and white (line art). This allows for readable lecture notes while preserving a small size per page (as you can see in Some Tips on Scanning Lecture Notes).

Hebrew Support in Hyperref – Situation Review

It’s been a bit more than three years since I wrote about a workaround for getting hyperref to play (almost) nicely with Hebrew. Over the past few weeks I’ve seen rising interest in this issue, and a few people have contacted me about it. So I thought it was a good opportunity to better document the current situation and the possible approaches that warrant further investigation, which I believe might lead to better solutions.