I struggle with RSI, tendinitis, or something like that (I never went
to a doctor, since there is no real way to deal with it anyway). That means that I quickly suffer
pain in my hands when using standard keyboards. With „ergonomic“ keyboards
(the name is sometimes good for a laugh) like the Microsoft Ergonomic 4000
it is OK most of the time, but when my hands are cold or when I work on the
keyboard for a long time the pain appears as well. Thus I've been looking for
alternatives. While keyboards like the truly ergonomic Kinesis and
the Maltron may be good for my hands, their prices are really steep: the
Maltron is available for about 700€ and the Kinesis for about 400€.
While I was willing to spend that money to get less pain in my
hands, I stumbled upon another alternative: the
ErgoDox. This is a FOSS keyboard (meaning
that the hardware layout as well as the firmware is available under
a FOSS license), it is split (very good for the hands) and it has an
ergonomic layout. And best of all: it is available for roughly 200€.
A couple of weeks ago my friends Jeena
and Sam Braumeister („Braumeister“ being German for „brewing master“, a fitting name)
and I brewed beer ourselves.
While we had a lot of fun during the brewing process,
we encountered some problems when we tried to reactivate the yeast. I
had bought a brewing set a year earlier, and since the package was broken (there was
a small hole in it) the yeast was simply dead. Since we were brewing
on a Saturday it was impossible to get a new pack of beer yeast. We
also didn't want to wait until Monday + x days for new yeast to arrive,
because we feared spontaneous fermentation.
Surprisingly, the yeast in the Maisels Weisse beer began to grow after we poured
it into some warm water and fed it with sugar. So we added it to the
brewing bucket, and after four weeks we got, considering that this was
our first brewing attempt, a decent beer.
Even cooler: I wrote Maisels an email telling them this story. I
wasn't expecting much; I didn't even think they would answer my email. But
after a few days I got a nice answer saying that they liked that I had started
home brewing and that they were proud this was possible. Cool!
In the past I was a fan of long lines. I thought that modern editors
are perfectly able to wrap lines virtually, so that the code remains
readable even when lines exceed the screen width. But since I began
working on the PostgreSQL
project I have realized that short lines are much more readable
after all. Your eyes don't get „lost“ and don't have to re-focus. It is
also easier to keep track of the surrounding context (which is
important, particularly in programming).
So I did some research, and I found out that it seems to be a well-known
fact in the typography community that most people can read
no more than 10 to 12 words per line before they have trouble
telling adjacent lines apart. This assumes an average word length
of five characters (four letters plus the following space). While I think that this is too
small (12 words of five characters each means we should add a line
break at 60 characters), a relaxed version with a 25%–50% increase
brings us to 15 words per line and thus 75 characters. That is very close to
the „80 characters per line“ rule of thumb propagated by many coding style guides.
So to summarize: capping line lengths at 80 characters still makes
sense. It improves the readability of the source code and makes it
easier to scan through it, and it gives a better overview of the
code as a whole. I for one enabled highlighting of long lines (lines
with more than 80 characters) in my Emacs:
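A minimal sketch of such a setup, using Emacs' built-in `whitespace-mode` (one common way to do this; your configuration may differ):

```elisp
;; Highlight the part of a line that runs past column 80.
(require 'whitespace)
(setq whitespace-line-column 80            ; flag lines longer than 80 columns
      whitespace-style '(face lines-tail)) ; highlight only the excess tail
(global-whitespace-mode 1)
```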
I was disappointed by various syncing solutions. Either they
were morally questionable, or they didn't really work, or they took
huge amounts of system resources. This made me think about what I
actually want from a syncing tool.
git's syncing model is very easy. You can do a pull and a
push. That's it. No magic, it has to be executed by hand, and it simply
works. I wanted file syncing to be equally easy. This is why I created a
new project: syncer. It uses
rsync to synchronize files between hosts and a simple
shell script as a wrapper.
We download changes via `sync pull`. This won't overwrite local
changes. We upload changes via `sync push`. This won't overwrite
changes on the master.
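The actual script lives in the syncer repository; a minimal sketch of the idea could look like this (`MASTER` and `SYNC_DIR` are placeholders, not the real configuration):

```sh
#!/bin/sh
# Minimal sketch of the syncer idea: rsync as the transport,
# git-like pull/push verbs. MASTER and SYNC_DIR are placeholders.
MASTER="user@host:/srv/sync/"
SYNC_DIR="$HOME/sync/"

case "$1" in
    pull)
        # Fetch changes from the master. --update skips files that are
        # newer locally, so local changes are not overwritten.
        rsync -av --update "$MASTER" "$SYNC_DIR"
        ;;
    push)
        # Upload changes to the master. Again --update protects files
        # that are newer on the master.
        rsync -av --update "$SYNC_DIR" "$MASTER"
        ;;
    *)
        echo "usage: sync {pull|push}" >&2
        exit 1
        ;;
esac
```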
For my work on the huge pages patch for PostgreSQL I also had to write some documentation. This was the first time I had done that, so I didn't have a working environment for building the documentation yet. At first, building the PostgreSQL documentation seems fairly easy: install the requirements, go to the directory, type make:
```sh
sudo emerge -av app-text/openjade app-text/docbook-sgml-dtd \
    app-text/docbook-dsssl-stylesheets app-text/docbook-xsl-stylesheets libxslt
```
But since Gentoo can install different versions of a package side by side (in so-called slots), it wasn't that easy. I always got this error message:
```
openjade:postgres.sgml:3:55:W: cannot generate system identifier for public text "-//OASIS//DTD DocBook V4.2//EN"
```
After reading some source code I finally found the solution: you need the 4.2 slot of the DocBook SGML DTD:
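Portage accepts a slot suffix in the package atom, so installing that slot explicitly should look roughly like this:

```sh
# Install the 4.2 slot of the DocBook SGML DTD explicitly.
sudo emerge -av app-text/docbook-sgml-dtd:4.2
```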
In the upcoming release PostgreSQL will support huge pages. When a program reserves memory, this is normally done in chunks of 4 kB, for performance reasons. Since a program's address space is virtual, the CPU and the OS have to translate virtual addresses into physical ones. This is done via a calculation (see Wikipedia for details). Since memory is accessed very often, the results of this calculation are cached in a buffer, the Translation Lookaside Buffer, TLB for short.

This buffer can only hold a limited number of entries, so using large amounts of memory, especially with 64-bit address spaces, can lead to serious performance hits. To counter this, modern architectures can use larger pages; x86, for example, can use pages of 2 MB. While a memory segment of 1 GB normally requires 262144 pages (1 GB / 4 kB), with huge pages of 2 MB it requires only 512 pages. That is a significant decrease in the number of pages, and the performance hit for address translation becomes much lower.
PostgreSQL will support this feature in 9.4. There will be a new configuration option `huge_tlb_pages` with the possible values `on`, `off` and `try`. While `off` turns the feature off, `on` enforces it; `try` will try to use huge pages and fall back to traditional 4 kB pages if that fails.
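In `postgresql.conf` this could look like the following sketch (using the option name as described above for 9.4):

```
# postgresql.conf
huge_tlb_pages = try    # on: enforce huge pages, off: disable, try: best effort
```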
Keep in mind that you will need to configure the system to allocate huge pages. They can't be swapped out, so the system has to allocate them up front before PostgreSQL can use them:
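One way to estimate how many pages are needed is to look at the peak virtual memory size of the running postmaster (its PID is the first line of `postmaster.pid`; this sketch assumes `$PGDATA` points at the data directory):

```sh
# Peak virtual memory size of the running postmaster, in kB
$ grep ^VmPeak /proc/$(head -1 "$PGDATA/postmaster.pid")/status
VmPeak:  6490428 kB
```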
6490428 kB / 2048 kB (a huge page is 2 MB) is roughly 3169.154 huge pages, so you will need at least 3170 huge pages:
```sh
$ sysctl -w vm.nr_hugepages=3170
```
Sometimes the kernel is not able to allocate the desired number of huge pages, so it might be necessary to repeat that command or to reboot. Don't forget to add an entry to /etc/sysctl.conf to persist this setting through reboots.
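For example, with the value from above:

```
# /etc/sysctl.conf
vm.nr_hugepages = 3170
```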