CK knows Wayne

Dreams don't work unless you do

Published by Christian Kruse
Filed under: dreams, work

This arrived a week ago:

Dreams don't work unless you do

I couldn't agree more.


Giant bone

Published by Christian Kruse
Filed under: Peer, bone, dog, giant

Can I have this giant bone, which is actually bigger than I am? Please?

Peer wanting this giant bone

Peer with his giant bone


Planting fruit trees

Published by Christian Kruse
Filed under: fruit tree, hobby, planting

Yesterday (although it was raining) my wife and I planted our first fruit trees:

Cherry tree

Cherry tree with Peer

We planted two apple trees and two cherry trees as well as a currant bush, a gooseberry bush and a bilberry bush. I'm really curious when we will be able to harvest our first fruits!


Home brewing

Published by Christian Kruse
Filed under: hobby, home brewing

A couple of weeks ago my friends Jeena and Sam "Braumeister" ("brewing master") and I brewed some beer ourselves.

Brewing master stirring

While we had a lot of fun during the brewing process,


we encountered some problems when we tried to reactivate the yeast. I had bought the brewing set a year earlier, and since the package was broken (there was a small hole in it) the yeast was simply dead. As we were brewing on a Saturday, it was impossible to get a new pack of beer yeast. We also didn't want to wait until Monday + x days for new yeast to arrive, because we feared spontaneous fermentation.

So we bought a bottle of Maisels Weisse and a bottle of Erdinger Urweisse and tried to reactivate the yeast from them.

Brewing bucket

Surprisingly, the yeast in the Maisels Weisse began to grow after we put it into some warm water and fed it with sugar. So we added it to the brewing bucket, and after four weeks we got, considering that this was our first brewing attempt, a decent beer.

Even cooler: I wrote Maisels an email telling them this story. I wasn't expecting much; I didn't even think they'd answer. But after a few days I got a nice reply saying that they liked that I had taken up home brewing and that they were proud this was possible. Cool!


Line length in programming

Published by Christian Kruse
Filed under: line length, programming

In the past I was a fan of long lines. I thought modern editors were perfectly able to soft-wrap lines so that code remains readable even when lines exceed the screen width. But since I began working on the PostgreSQL project I have realized that short lines are much more readable nonetheless. Your eyes don't get „lost“ and have to re-focus, and it is easier to keep track of the surrounding context (which is important, particularly in programming).

So I did some research, and found that it seems to be a well-known fact among typographers that most people can read no more than 10 to 12 words per line before they have trouble telling lines apart. This assumes a word has an average length of 5 characters. While I think this is too strict (12 words of 5 characters each means we should break lines at 60 characters), a relaxed version with a 25%-50% increase brings us to 15 words per line and thus 75 characters. Very close to the „80 characters per line“ rule of thumb propagated by the gray-bearded ones.

So to summarize: capping line length at 80 characters still makes sense. It makes source code easier to read and to scan, and it helps you keep an overview of the code. I for one enabled highlighting of long lines (lines with more than 80 characters) in my Emacs:

(require 'whitespace)
;; enable whitespace visualization globally
(global-whitespace-mode t)

;; highlight everything beyond column 79
(setq whitespace-line-column 79)
(setq whitespace-style
      '(face lines-tail))

This results – with my color theme – in a nice red highlighting of the trailing part of the line:

Image of a long line highlighted in red


Easy, git-style syncing

Published by Christian Kruse
Filed under: easy, git, syncing

I have been disappointed by various syncing solutions: either they were morally questionable, they didn't really work, or they took huge amounts of system resources. This made me think about what I really need.

git-style syncing

git's syncing model is very simple: you can pull and you can push. That's it. No magic, it has to be executed by hand, and it simply works. I wanted my syncing to be equally simple. This is why I created a new project: syncer. It uses rsync to synchronize files between hosts, with a simple shell script as a wrapper.

sync pull

We download changes via `sync pull`. This won't overwrite local changes.

sync push

We upload changes via `sync push`. This won't overwrite changes on the master.
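A minimal sketch of what such a wrapper could look like (the paths and the function name here are hypothetical placeholders, not the real syncer defaults):

```shell
#!/bin/sh
# Hypothetical sketch of a git-style sync wrapper around rsync.
# REMOTE and LOCAL_DIR are placeholders; adjust to your setup.
REMOTE="user@host:/path/to/master/"
LOCAL_DIR="$HOME/sync/"

sync() {
  case "$1" in
    pull)
      # --update skips files that are newer on the receiving side,
      # so local changes are not overwritten
      rsync -az --update "$REMOTE" "$LOCAL_DIR"
      ;;
    push)
      # likewise, newer files on the master are left alone
      rsync -az --update "$LOCAL_DIR" "$REMOTE"
      ;;
    *)
      echo "usage: sync pull|push" >&2
      return 1
      ;;
  esac
}
```

Saved as `sync` somewhere on `$PATH` (with `sync "$@"` appended to dispatch on the first argument), this gives the two commands described above.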


Compile PostgreSQL documentation on Gentoo Linux

Published by Christian Kruse
Filed under: documentation, gentoo, postgresql

For my work on the huge pages patch for PostgreSQL I also had to write some documentation. This was the first time I had done that, so I didn't have a working documentation build environment. At first, building the PostgreSQL documentation seems fairly easy: install the requirements, go to the directory, type make:

sudo emerge -av app-text/openjade app-text/docbook-sgml-dtd app-text/docbook-dsssl-stylesheets app-text/docbook-xsl-stylesheets dev-libs/libxslt
cd /home/ckruse/dev/postgresql/src/doc
make

But since Gentoo handles installation of different versions of packages, it wasn't that easy. I always got this error message:

openjade:postgres.sgml:3:55:W: cannot generate system identifier for public text "-//OASIS//DTD DocBook V4.2//EN"

After reading some source code I finally found the solution: you need the 4.2 slot of the DocBook SGML DTD:

emerge app-text/docbook-sgml-dtd:4.2
cd /home/ckruse/dev/postgresql/src/doc
make

After that, everything compiled just fine.


Feature preview: huge pages with PostgreSQL

Published by Christian Kruse
Filed under: huge, pages, postgresql

In the upcoming release PostgreSQL will support huge pages. When a program reserves memory, this is normally done in chunks of 4 kB, for performance reasons. Since a program's address space is virtual, the CPU and the OS have to translate virtual addresses to physical ones. This is done via a calculation (see Wikipedia for details). Since memory is accessed very often, this translation is cached in a buffer, the Translation Lookaside Buffer, TLB for short. Because this buffer can only hold a limited number of entries, using large amounts of memory, especially with 64-bit address spaces, can lead to serious performance hits. To counter this, modern architectures can use larger pages; x86, for example, can use pages of 2 MB. While a memory segment of 1 GB would normally require 262144 pages (1 GB / 4 kB), with huge pages of 2 MB it requires only 512 pages. That is a significant decrease in the number of pages, and thus the performance hit for address translation is much lower.

PostgreSQL will support this feature in 9.4. There will be a new configuration option, huge_tlb_pages, with the possible values on, off and try. off turns the feature off, on enforces it, and try tries to use huge pages and falls back to traditional 4 kB pages if that fails.
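In postgresql.conf this could look as follows (a minimal sketch; try is the safest choice because it falls back to regular pages if the kernel can't provide huge pages):

```
# postgresql.conf (excerpt)
# use huge pages if the kernel provides them, otherwise fall back
huge_tlb_pages = try
```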

Keep in mind that you will need to configure the system to allocate huge pages. They can't be swapped out, so the system has to allocate them before you can use them:

$ head -1 /path/to/data/directory/postmaster.pid
4170
$ grep ^VmPeak /proc/4170/status
VmPeak:	 6490428 kB

6490428 kB / 2048 kB (a 2 MB huge page) is roughly 3169.15 huge pages, so you will need at least 3170 huge pages:
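The calculation above can be sketched in a shell, rounding up to whole pages:

```shell
# VmPeak in kB divided by the huge page size in kB (2 MB = 2048 kB),
# rounded up, gives the number of huge pages needed
vmpeak_kb=6490428
hugepage_kb=2048
pages=$(( (vmpeak_kb + hugepage_kb - 1) / hugepage_kb ))
echo "$pages"
```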

$ sysctl -w vm.nr_hugepages=3170

Sometimes the kernel is not able to allocate the desired number of huge pages, so it might be necessary to repeat that command or to reboot. Don't forget to add an entry to /etc/sysctl.conf to persist this setting through reboots.


New job at 2ndQuadrant

Published by Christian Kruse
Filed under: 2ndquadrant, job, postgresql

For (at least) the next six months I will be working for 2ndQuadrant, where I will (partly) get paid to work on FOSS. Yay!


ejabberd: fixing the pubsub problem

Published by Christian Kruse
Filed under: ejabberd, operation, pubsub, pubsub_node

After an update of ejabberd I got some very strange errors:

=ERROR REPORT==== 2013-11-11 09:50:38 ===
E(<0.583.0>:mod_pubsub:3863) : transaction return internal error: {aborted,

And loads of them, one for each virtual host I'm hosting. The reason seems to be a changed mnesia table structure combined with a bug in the check for the old version after an upgrade: the table structure either didn't get updated for me or failed to update, who knows.

The only way I could find to get rid of this error was to drop the table through the web admin interface. After a restart the table gets re-created. Don't worry about losing data: the pubsub module doesn't work with the corrupted table anyway…