I've been running a local DNS server for the last decade. I do that for two reasons: first, to bypass censorship and surveillance; and second, to profit from the essentially instantaneous answers of a local DNS cache. The latter point has become increasingly important in recent years, since modern websites tend to pull in resources from dozens of other domains, all of which have to be resolved. A satisfactory web experience thus requires, above all, a low-latency connection to the DNS server we use.

To quantify “low”, let's have a look at the typical connections we have at home. With my vanilla ADSL2+, the latency to the fastest DNS servers around amounts to 8 ms for my desktop, which is connected to the router via a Gbit switch, or 12 ms for all devices connected via Wi-Fi (802.11g). These values are not too bad, but not what I'd associate with “low latency”. At work, for example, we use a dedicated DNS server available on the intranet with a latency of 0.3 ms. Now we're talking.

I've had the same speed at home since 2009, when I started using pdnsd. Unfortunately, the development of this caching DNS proxy stopped in 2012. In addition, the deployment of DNSSEC began in 2010, and it is now an indispensable part of the modern internet. To keep up with this development, I hence needed a local recursive DNS resolver that not only caches, but also validates. The article in c't 12/2017 about the validating, recursive and caching DNS server Unbound thus came just in time.

The setup provided by c't applied to Ubuntu, and proved to be incomplete anyway (see the comments at the end of the article). With the help of the Archwiki and Calomel, I came up with the following configuration that works as desired on Archlinux. On Debian or Fedora/CentOS, some of the initial steps may not be necessary.

We first install unbound

pacman -S unbound

and enable the service

systemctl enable unbound.service

We need to edit the service unit, and do that by issuing

systemctl edit unbound.service

to create a drop-in snippet. The command above automatically opens your $EDITOR (in my case, vim). The content of the snippet should be:

[Service]
ExecStartPre=/usr/bin/unbound-anchor -a /etc/unbound/root.key

After saving the file, give it a meaningful name:

cd /etc/systemd/system/unbound.service.d
mv override.conf update_rootkey.conf

We can now turn to the configuration of Unbound. Replace the default configuration file /etc/unbound/unbound.conf by a file with the following content:

# Unbound configuration file
# See the unbound.conf(5) man page.
# See /etc/unbound/unbound.conf.example for a commented
# reference config file.
# The following line includes additional configuration files from the
# /etc/unbound/unbound.conf.d directory.

include: "/etc/unbound/unbound.conf.d/*.conf"

Next, we create this directory:

mkdir /etc/unbound/unbound.conf.d

Let's put the following four files in this directory:


## Basic configuration
        interface: ::0
        access-control: ::1 allow
        access-control: 2001:DB8:: allow
        # Example for a ULA:
        # access-control: fd00:aaaa:bbbb::/64 allow
        access-control: allow
        verbosity: 1

        name: "."
        # hopefully free of censorship and logging, definitely with DNSSEC support:
        forward-addr:            # (CCC)
        forward-addr:            # (Universität der Künste)
        forward-addr:            # (Digitalcourage e.V.)
        forward-addr:            # (CCC)
        forward-addr:            # (Nürnberger Internet eXchange)
        forward-addr:            # (DNS Watch)
        forward-addr:            # (DNS Watch)
        forward-addr:            #
        forward-addr:            #
        forward-addr:            # (UncensoredDNS)
        forward-addr:            # (UncensoredDNS)
        forward-addr:            # (CCC)
        forward-addr:            # (OpenNIC)
        forward-addr:            # (OpenNIC)


## Advanced configuration
        verbosity: 1
        port: 53
        do-ip4: yes
        do-ip6: yes
        do-udp: yes
        do-tcp: yes

        root-hints: /etc/unbound/root.hints

        auto-trust-anchor-file: /etc/unbound/root.key

        hide-identity: yes
        hide-version: yes
        harden-glue: yes
        harden-dnssec-stripped: yes
        use-caps-for-id: yes

        qname-minimisation: yes

        cache-min-ttl: 3600
        cache-max-ttl: 604800

        prefetch: yes
        # num-threads is set in the performance file below

        include: /etc/unbound/adservers


## Reduce edns packet size to help big udp packets
## over dumb firewalls
        edns-buffer-size: 1232
        max-udp-size: 1232


# Performance optimization
        # use all CPUs
        num-threads: 8

        # power of 2 close to num-threads
        msg-cache-slabs: 8
        rrset-cache-slabs: 8
        infra-cache-slabs: 8
        key-cache-slabs: 8

        # more cache memory, rrset=msg*2
        rrset-cache-size: 200m
        msg-cache-size: 100m

        # more outgoing connections
        # depends on number of cores: 1024/cores - 50
        outgoing-range: 100

        # Larger socket buffer.  OS may need config.
        so-rcvbuf: 8m
        so-sndbuf: 8m

        # Faster UDP with multithreading (only on Linux).
        so-reuseport: yes

We're almost done now. Two cronjobs in /etc/cron.weekly complete the configuration:


#!/bin/sh
# Updating root hints.

###[ root.hints ]###

curl -sS -L --compressed -o /etc/unbound/

if [ $? -eq 0 ]; then
  mv /etc/unbound/root.hints /etc/unbound/root.hints.bak
  mv /etc/unbound/ /etc/unbound/root.hints
  unbound-checkconf >/dev/null
  if [ $? -eq 0 ]; then
        rm /etc/unbound/root.hints.bak
        systemctl restart unbound.service
        echo "Warning: Errors in newly downloaded root hints, probably due to an incomplete download:"
        mv /etc/unbound/root.hints /etc/unbound/
        mv /etc/unbound/root.hints.bak /etc/unbound/root.hints
  echo "Download of unbound root.hints failed!"


#!/bin/sh
# Updating adserver list.

###[ adservers ]###

curl -sS -L --compressed -o /etc/unbound/ ""

if [ $? -eq 0 ]; then
  mv /etc/unbound/adservers /etc/unbound/adservers.bak
  mv /etc/unbound/ /etc/unbound/adservers
  unbound-checkconf >/dev/null
  if [ $? -eq 0 ]; then
        rm /etc/unbound/adservers.bak
        systemctl restart unbound.service
        echo "Warning: Errors in newly downloaded adserver list, probably due to an incomplete download:"
        mv /etc/unbound/adservers /etc/unbound/
        mv /etc/unbound/adservers.bak /etc/unbound/adservers
  echo "Download of unbound adservers failed!"

The adserver component is of course optional, but I've found it to be a very efficient way of blocking ads. I'll compare the various possibilities to block ads in a forthcoming post.

For the moment, let's concentrate on the core competences of our new DNS server. To do so, we first start it by issuing

systemctl start unbound.service

We can test the nameserver on the command line using either dig or its near drop-in replacement drill.

dig +dnssec +multi @localhost
drill -D @localhost

What's essential here are the first few lines of the output and the entries in rcode and flags: 'NOERROR' and 'ad', the latter standing for 'Authenticated Data'. In other words, the DNS response is authentic because it was validated via DNSSEC. The RRSIG records contain, among other data, the cryptographic signatures over the domain's record sets, as explained here.

Let's try that with a domain which is not validated by DNSSEC:

dig +dnssec +multi @localhost

NOERROR, but no 'ad' flag. Quite all right.

And now a domain with a broken/bogus DNSSEC record:

dig +dnssec +multi @localhost

Status: SERVFAIL. Works as well.

Last but not least, let's test the cache of unbound:

for i in $(seq 1 5); do dig | grep 'Query time' | awk '{print substr($0, index($0, $2))}'; done
Query time: 746 msec
Query time: 0 msec
Query time: 0 msec
Query time: 0 msec
Query time: 0 msec

Works. ;)

If the command line appears to be too cryptic, we can also test the basic DNSSEC functionality with a browser:

For addresses with broken/bogus DNSSEC records, such as this one, the browser should just display an ERR_NAME_NOT_RESOLVED page. It does? Excellent.

Still...that page is depressing. Let's boost our morale by visiting :


Thank you, Matthäus ;) .

Representative surveys

Statistical surveys are a standard tool of sociology, and have been the subject of extensive research. In the hands of professionals, the results of these surveys can be surprisingly accurate. As a result, surveys have attained the status of the oracle of Delphi, and people fervently believe in them. Naturally, this development has made surveys an attractive tool for manipulating public opinion. The standard way to do this is to load the questions with a moral obligation. Don't you agree that the internet should be regulated to deprive extremists of their safe spaces online? No? Really, what kind of person are you? Don't you ever think of the children?

However, unexpected results of surveys do not always have sinister reasons, but may instead simply reflect the incompetence of the inquirer. In particular, the most elementary rule for designing a survey is frequently forgotten: namely, that those taking part in the survey have to understand the questions. Sounds obvious, doesn't it? Apparently, it isn't.

An example: the recent news on heise online that only(!) 16 percent of all Germans encrypt their emails (Umfrage: Nur 16 Prozent der Deutschen verschlüsseln ihre E-Mails — “Survey: only 16 percent of Germans encrypt their emails”). This survey was conducted by Convios Consulting on behalf of United Internet (UI), one of the largest internet and mail providers in Germany.

UI claims that about 750,000 of their users have generated PGP key pairs. That's a very impressive number, particularly since according to UI, only 4.7 million keys “exist” worldwide. The UI users would thus account for 16% of all PGP keys. Doesn't that demonstrate very nicely that UI's encryption initiative introduced in August 2016 is highly successful?

Well, the whole reason for the survey was to create exactly this impression. I have no doubts that the numbers quoted above are correct, but what do they mean?

First of all, the number reported for the existing keys worldwide only accounts for keys that have been deposited on key servers. Nobody can estimate how many keys have actually been generated or are in use. The situation is quite different for the UI encryption scheme, which is based on Mailvelope and stores the user's public key in a database on a UI server. The number given above is thus the total number of UI customers with a PGP key, unless they use a separate key in a stand-alone MUA (of which I know two ;)).

There are about 40 million email users in Germany. According to UI, about half of them use GMX or WEB.DE, which seems reasonable as UI is reported to have close to 20 million customers. Now, let's suppose that all UI customers who have generated a key actually use it to encrypt their mail. In this unlikely case, 3.75% of all UI users would encrypt their mail, much less than the “only 16%” of the survey. Obviously, that must mean that 28.25% of the other 20 million email users in Germany, who are mostly customers of Deutsche Telekom, Google, and Microsoft, encrypt their mail. Right?

Of course not. Try asking arbitrary Gmail users whether they encrypt their mail. 84% will look at you with blank eyes, but 16% will recognize the word and confirm that they do, YES! Ask them afterwards if they know the difference between transport and end-to-end encryption. I guarantee that you will get tired of asking long before you find a single one who can answer that second question...

How many people do encrypt their mails? I don't think there are any bona fide surveys on that topic. I can only provide anecdotal evidence with very limited statistical significance. On the other hand, I've been a serious advocate of end-to-end encryption for about 15 years. I've written tutorials and motivated many of my personal contacts to use end-to-end encryption in email and messaging. Well, some would say I forced them at gunpoint. But that would be an exaggeration...

I currently have 49 personal contacts with public PGP keys, and 16 business contacts. That doesn't sound too bad, does it? However, 17 and 4 of these keys are expired, leaving 32 and 12, respectively. Subtracting keys whose pass phrases have been forgotten by their users or were otherwise disposed of leaves 19 and 7. Some of my contacts have passed away, are retired, or I've simply lost touch, leaving in the end 4 and 5 with which I can, in principle, exchange end-to-end encrypted mails. In practice, however, there are only three persons with whom I regularly exchange encrypted mails: my patent attorney at work and my fellow PdeS (that's why they have that label) in private life.

Three out of 65 with an actively used key, but out of how many without any clue what that even means? I don't want to spend time on the question of how I could count the number of unique addresses in my mail folders over the past few years, but this number would obviously be in the several hundreds. In other words, the total percentage of people employing end-to-end encryption in my emails is way lower than 1%. And if I weren't interested in this kind of thing, and if I weren't a scientist, this percentage would be exactly zero. Not 16%. Not 0.16%. Zero.

Can we find out how many people really encrypt their mails by a survey? Not really. If fewer than 100 ppm of all people encrypt (which is the number I find most plausible), we would need a mega-survey of 50,000 people to include at least 5 people who actually do encrypt, and that's never going to happen. And don't let them tell you that the rules of statistics somehow don't apply here and representative surveys can answer all of these questions as if by magic. That's bullshit. All of it.
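The 50,000 figure is simple expectation-value arithmetic: at a rate of 100 ppm, a sample of n people contains on average n/10,000 genuine encrypters.

```shell
# Expected number of genuine encrypters in a 50,000-person survey at 100 ppm.
awk 'BEGIN { printf "%g\n", 50000 * 100 / 1e6 }'   # prints 5
```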

Debian 9

Stretch is stable. Testing is now called Buster.

sed -i 's/stretch/buster/g' /etc/apt/sources.list

I could equally well just use 'testing', but for some presumably deeply rooted psychological reasons, I like the codenames better.

For my veteran netbook mini, Buster is now the 7th incarnation of Debian or a Debian derivative in its 9 years of operation: Etch, Lenny, Squeeze, Wheezy, Jessie, Stretch, Buster. I'm sure it will also run Bullseye. ;)

One does not simply exit vim

A few days ago, Stack Overflow hit a major milestone: the community has helped one million developers exit vim. If that isn't reason for celebration, nothing is.

'Of course, these aren't real programmers, since those use something else entirely.'

Surely the mighty ed?

'Not even close.'

For those still interested in the grandfather of vi (aka the most user-hostile editor ever created): here's an excellent little tutorial. By the way, you can exit ed by the canonical quit command of Unix applications: q. Much more intuitive than vim!

I'll write about the editors I'm using (and why) in the near future.

TCAD station, part II

As a testbed for commercial TCAD software, we will use a standard desktop PC equipped with an i7-4790 CPU and 32 GB RAM, and an Nvidia GT 710 to connect the monitor via DVI. As a substitute for the Redhat Enterprise Linux (RHEL) required by all of the packages we're about to evaluate, I'm going to install CentOS.

I create a bootable USB stick by issuing

dd bs=4M if=CentOS-7-x86_64-Minimal.iso of=/dev/sdc status=progress && sync

The first stick doesn't work, and it takes us perhaps half an hour to realize that it's the fault of the stick, not the system. A second one works right away. I manually partition the disk, as in my installation in a virtual machine, choosing btrfs as the filesystem for both system and home partitions. I add a UEFI boot partition as well as a swap partition.

During the first boot from hard disk, the system throws an error message concerning nouveau (the open-source driver for the Nvidia graphics card) and subsequently hangs with a kernel soft lock, signified by the repeated message that CPU #i is stuck for 120 s. After a hard reset, the system boots and behaves as expected, but when I boot again to activate the new kernel installed by yum, the system hangs again. Since the hangup is always preceded by the error message from the nouveau driver, we remove the Nvidia card and reboot. Now the system hangs without any message. Great! A cold boot (with the Nvidia card installed again) seems to help at first, but later the system hangs repeatedly and finally refuses to boot to the login screen at all.

What I thought would be done within an hour has already taken most of this Friday morning. Since the Nvidia card does not seem to be responsible for these problems, I start to suspect the btrfs filesystem, which Redhat still considers a technology preview. In the afternoon, I thus reinstall the system, this time choosing the default filesystem, XFS. And indeed, while I still get the error message from nouveau, the system boots up without any of the previous symptoms. I go home with the conviction that I've solved the problem.

Saturday morning, curiosity gets the better of me and I decide to check if the system still runs. It does, but htop shows that the uptime is only 49 min instead of the expected 13 h. Weird! I check again Sunday morning, and again the uptime is less than an hour. At the same time, I notice that the filesystem usage seems to increase by roughly 1 GB per day. Aha! An 'll /var/crash' confirms my suspicion: the system crashes and dumps the kernel roughly every 2 or 3 hours.

Core dumps can be analyzed to determine their cause. Following the Redhat tutorial and issuing the command

crash /usr/lib/debug/lib/modules/3.10.0-514.16.1.el7.x86_64/vmlinux /var/crash/\:35\:53/vmcore

I get the following crash report:

KERNEL: /usr/lib/debug/lib/modules/3.10.0-514.16.1.el7.x86_64/vmlinux
        DUMPFILE: /var/crash/  [PARTIAL DUMP]
                CPUS: 8
                DATE: Mon May  1 07:35:42 2017
          UPTIME: 08:27:43
LOAD AVERAGE: 0.05, 0.03, 0.05
           TASKS: 224
         RELEASE: 3.10.0-514.16.1.el7.x86_64
         VERSION: #1 SMP Wed Apr 12 15:04:24 UTC 2017
         MACHINE: x86_64  (3600 Mhz)
          MEMORY: 31.9 GB
           PANIC: "Kernel panic - not syncing: Hard LOCKUP"
                 PID: 0
         COMMAND: "swapper/6"
                TASK: ffff880174a9edd0  (1 of 8)  [THREAD_INFO: ffff880174ab8000]
                 CPU: 6

This diagnosis lets me find the cause of the core dumps very easily: it's an unresolved bug, reported in September 2016 and given high priority by the developers. The bug was found in version 7.2.1511 but has not been fixed even in the current version 7.3.1611. However, the bug reporter and others have identified the nouveau driver as the culprit. And indeed: after removing the GT710 and rebooting, the system does not suffer from any further lockups or core dumps:

 ob@testbed:~$ uptime
19:00:08 up 17 days,  9:00,  2 users,  load average: 0.00, 0.01, 0.05

I have to admit that this experience caught me by surprise. CentOS, as a binary-compatible clone of RHEL, has the reputation of being the very model of a conservative Linux distribution, and thus a paragon of stability and reliability. Consequently, I had expected outdated software, but not buggy implementations of core packages, and not critical bugs that remain open for 8 months without eliciting any response from a developer.

Thinking twice about it, however, I realize that I should not have been surprised at all. About ten years ago, we decided to switch our core servers from SUSE Enterprise Linux (SLES) to OpenSUSE, since we were entirely frustrated with the support and bug-fixing policy of SLES, despite the fact that we paid a handsome amount to Novell every single year. Personally, I'm not too fond of OpenSUSE either, but the core servers don't overly concern me. Our compute servers, for which I'm responsible, are running Debian Testing, and in view of the minimal administrative effort required over the past ten years, I congratulate myself on this decision.

TCAD station, part I

You certainly know the old tune about the lack of “professional” software for Linux, with “professional” usually being an implicit synonym for Microsoft Office and the Adobe Creative Suite. To a scientist or engineer, people joining this chorus appear to be misinformed and motivated by ideology rather than reality. In technically oriented fields, software is in fact mostly cross-platform or even developed primarily for Linux. That's true in particular for multithreaded software with non-negligible demands in terms of computational resources and five- to six-figure price tags. Examples include Maxwell solvers, multiphysics solutions, and TCAD packages.

Commercial software for Linux is usually certified only for one of the enterprise Linux distributions, namely Redhat or SUSE Enterprise Linux (RHEL and SLES). Some of these products turn out to be distribution-agnostic, meaning that they also run without any problems on, e.g., Debian. But in many other cases, the software runs, and sometimes even installs, only on the distribution it was developed for. I've learned that the hard way, wasting an entire day trying to get the commercial finite-difference time-domain simulation package FDTD Solutions to work under Debian Stretch. In the end, we've used Meep instead.

I've vowed that this wouldn't happen again, and since we are in the process of evaluating selected TCAD solutions (all of which are certified for RHEL only), it seemed wise to set up a test server running CentOS (a binary-compatible clone of RHEL). The TCAD software requires a graphical interface, but I did not intend to perform a standard installation, which results in a complete Gnome desktop suitable for a workstation rather than a compute server. For servers, I prefer the installation to be as lightweight as possible. For example, I usually install the tiling window manager wmii if a graphical interface is required or desirable on a server.

Since I'm not at all familiar with CentOS and the available software, I decided to first set up a virtual machine to look for possible pitfalls. For the base installation, I've downloaded and installed the minimal ISO, which I've then, upon first boot, updated with:

yum update
grub2-mkconfig -o /boot/grub2/grub.cfg

Why the reconfiguration of grub? Well, the update installed a new kernel, but CentOS did not automatically update the grub configuration file. Weird, but true.

CentOS offers three tiling window managers, two of which I'm familiar with: i3 and xmonad (I'd never even heard of spectrwm). On second thought, however, I realized that it may be a better idea to install a more conventional desktop to give the TCAD testers an environment they feel comfortable with. XFCE seemed to be a reasonable compromise and can be installed with these few steps:

yum install epel-release -y
yum groupinstall "X Window System" -y
yum groupinstall "Xfce" -y
systemctl get-default
systemctl set-default
systemctl isolate

The last step seamlessly starts the X Window System and thus catapults one into the graphical desktop. Slick! Now we only want a resolution higher than the old-fashioned 1024x768 offered by the default (VESA) driver. In other words, we need to install the VirtualBox guest additions:

yum install dkms
yum groupinstall "Development Tools"

To install the guest additions, I first selected 'Insert Guest Additions CD Image' in the 'Devices' menu, which downloads the image if it is not yet present. After the download, I mounted the image and compiled the guest additions:

mkdir /media/vboxadditions
mount /dev/cdrom /media/vboxadditions
cd /media/vboxadditions

Still no higher display resolutions available? The script in this thread enabled me to activate the 'Auto-resize Guest Display' option in the 'View' menu, which finally allowed me to use the desired full HD resolution (on a WQHD monitor):

Display_Name=`xrandr | grep ' connected' | cut -d' ' -f1`
Display_Spec=`cvt 1920 1080 | grep Modeline | cut -d' ' -f2 | cut -d'"' -f2`
Display_Params=`cvt 1920 1080 | grep Modeline | cut -d' ' -f2-18 | sed s/'"'//g`

xrandr --newmode $Display_Params
xrandr --addmode $Display_Name $Display_Spec
xrandr --output $Display_Name --mode $Display_Spec

In the second part of this post, I will write about the installation of CentOS on the physical server we have reserved for the evaluation stage. Given the (largely) pleasant experience with the virtual machine, I expected this task to be entirely straightforward. I thought I'd be done in an hour, including the configuration of user accounts as well as the sshd and vncd daemons for remote access. Well ... it took more than one day. Stay tuned. ;)

Lingua franca

Last week, a colleague of mine who's using Antergos showed me a particular error message he was getting when attempting to update the system, and asked whether I could help him resolve it. However, I had never seen this error message before, and recommended searching for it. Only later did I realize that the error message had been in German.

Let me give you one piece of advice: whatever Linux distribution you are using, whatever your native tongue may be, do not choose German. Nor Spanish, French, Hindi, or Chinese. Do not use any system language other than English!

Why? Well, just compare the activity on the generic, primarily English-speaking Arch forum with the German one. Right now, it's about 70 active topics compared to 3. A similar ratio applies to the number of answers you are likely to find when searching for a particular Archlinux problem in English versus German. Questions not posted in English are proportionally less likely to get a response.

In my 30 years of computer usage, I've in fact never used any system language other than English. Oh well, that's not entirely true: the Macintosh II and the 386 Mitsubishi notebook I worked with in 1992 had a Japanese user interface, but that had not been my decision. Other than those, I've always used English regardless of the OS. I believe that this decision is the major reason why, regarding computer installations, I've usually fared better than most of my contemporaries, although we nominally did the same things. As a matter of fact, one often finds plenty of solutions on the internet when searching for an error message in English, but nothing at all in any other language.

Æchter Senf

As a native German, I was born with the genetic predisposition to love Bratwurst, Sauerkraut, and Kartoffelsalat. However, I insist that the Bratwurst, regardless of its provenance, is served with mustard, and not just any mustard. Unfortunately, the standard mustard in Germany (“mittelscharfer Senf”) is a feculent substance that sickens me even if I only think about it. It tastes just like one would imagine a vinegar-salt-sugar paste with a homeopathic dose of mustard powder would taste: disgusting.

What I expect from mustard is really very simple: it should taste of mustard (and not exclusively of vinegar) and have the effect of mustard, i.e., I want to experience the familiar nose-tingling sensation one also knows from horseradish or wasabi. And that's to be expected, because all these condiments contain C4H5NS (3-isothiocyanato-1-propene), better known as allyl isothiocyanate.

Wasabi probably has the highest C4H5NS content among the Brassicaceae, and it was in fact in Japan where I discovered my love for the effect this substance has on the “mucous membranes of the sinuses”, as medically oriented people would put it. I've been invited several times to high-end sushi places in which wasabi was freshly prepared right at the dining table using a shark-skin oroshigane. I'm not too fond of sushi in general, but tekkamaki and unakyumaki are absolutely delicious when served with a proper amount of fresh wasabi.

And I soon found that the Japanese are serious mustard aficionados as well. Karashi, for example, is just plain brown mustard powder mixed with water, and is served with many popular dishes such as oden. Sausages are also highly popular and are usually consumed with a variety of excellent mustards available at any 7-11.

And then I returned to Germany, just to discover that we live in a culinary desert. We have great sausages, oh yes, but where's the mustard of equivalent quality? A regular supermarket offers a dozen different varieties of “mittelscharfer Senf” (all tasting exactly the same), one sweet Bavarian variant (sugar paste with one or two mustard seeds), and two or three Dijon-type mustards such as Löwensenf and Maille Dijon Originale. The latter ones are the least despicable, but I'm still not satisfied with their effect on my nose.

But who cares what the local market offers, this is the age of online shopping! Right?

Well, I order a lot of my food on the internet, and consequently also tried various offers I found online. Einbecker produces mustard with excellent taste, as does my favorite chili shop. I had set my hopes on the latter, as there were rumours in the forum that Michael Dietz, the founder of Chili Food, planned to create a truly hot mustard. In the end, it turned out to be just another of the so-called hot mustards that are simply pimped with chili.

But chili and mustard are two totally different beasts. As stated above, the desirable effect of mustard relies on the presence of allyl isothiocyanate. Chili, in contrast, is powered by C18H27NO3 [(6E)-N-[(4-hydroxy-3-methoxyphenyl)methyl]-8-methylnon-6-enamide], or capsaicin. Their effects are not just different: they are entirely distinct. I don't understand why it seems to be increasingly popular to simulate the former with the latter. Imagine the opposite: somebody asking for Tabasco getting a pouch of Löwensenf.

In my desperation, I've even ordered Karashi and Colman's via Amazon. But come on: besides the obscene price tag, I really do not want to depend on a US internet retailer for a satisfactory Thüringer experience.

And then I found ECHTER LOOSER SENF and began to see the light. Hell, yes, why should I not produce my own mustard? How difficult can it be?

As it turned out, it's about as difficult as making a coffee. I've chosen this comparison as we have to grind the mustard seeds, which can be done with a mill or a mortar. Talking of mustard seeds: those are of course the central ingredient of a mustard. ;) Since I aimed for a really, genuinely, absolutely hot one, brown mustard seeds were required, which can be obtained from specialized spice shops such as this one. A plain hot mustard can then be obtained with only a few more basic ingredients. Prior to its preparation, however, it's important to realize that mustard has to be enjoyed fresh. Certainly, one can keep it for months and perhaps even years, and it never goes “bad” in a microbiological sense. But after a few days, its bite is gone! So put your freshly prepared mustard in the fridge and wait overnight to let it mellow, but don't keep it longer than a week.

With that in mind, I recommend preparing only a minute amount, sufficient for just one or two meals:

Brown mustard seeds:                    25 g
Water:                                  30 ml
Wine:                                   5 (10) ml
Vinegar:                                15 (10) ml
Salt:                                   1.5 g
Sugar:                                  0.15 g

Grind the seeds in a mill or mortar (I recommend the latter). Mix water, vinegar and wine with salt and sugar. Pour the mixture over the mustard powder and stir carefully. Put it in the fridge overnight, but let it warm to room temperature before consumption. Let me add that the choice of wine and vinegar is important, as these two ingredients are largely responsible for the character of the mustard. That is also the reason why I allowed for some flexibility in their relative amounts.

Server security

I really didn't expect that, but my recent post about our new server attracted more questions than all posts in 2016 combined. I thought that people interested in such old-school IT issues would be essentially extinct, but apparently a few still exist.

In what follows, I try to provide some answers. I've grouped the questions such that they revolve around the same topic even if they were not asked by the same person. My answers are all short except for the last one, where I elaborate on server security.

Can every 'ordinary' citizen rent such a server? Or do we need a trade license (Gewerbeschein)? Do we need special certificates?

No, anyone can rent a server. Technically, however, it is not advisable to run a server without some basic knowledge of system and network administration. I'll come back to that point in more detail below.

What can I do with my own server? What benefit does it offer?

You can do anything you can imagine doing with a computer. And what's most important: it's yours, and no Google/Facebook/Dropbox will suddenly discontinue the service to “optimize the user experience”. And if your hoster goes bankrupt, just move to the next one—you can always rent another server (unless the government decides to ban private servers to “fight cyberterrorism” or whatever else is en vogue).

With your own server, you could, for example, host your blog, as I do. You could run a mail server, set up your own cloud, provide groupware for yourself and your family, or use it as a game server and communicate via IRC, as we do. You could also install a Jabber server supporting OMEMO to provide a Skype and Whatsapp replacement for your family and friends that is guaranteed to be safe from eavesdropping.

What the heck is this Jessie and Stretch thing? Do I need to know that?

Perhaps not. But I'm not sure. If you don't know, you are in all likelihood not very familiar with GNU/Linux. And that's not the ideal basis for administering your own server. Well, you could rent a Windows server, of course. I have no idea for what reason, though.

Can we rent a server anonymously?

Yes, for example here. Note: in case you want to register a domain (and who doesn't?), you want to do that anonymously as well. Otherwise your identity is revealed by a simple whois request (check, for example, 'whois').

How do you connect to the server? Can I use it also by ftp? Or by the Windows Explorer? What about smartphones?

One has to distinguish the interface used to administer the server from the services provided by it. Regarding the former, I connect exclusively via ssh (or, more precisely, via mosh, an ssh replacement). I also copy files this way, using the ssh-based tools scp, sftp and sshfs. There are ssh clients for Android and iOS as well, so you can administer your server from anywhere you like. Regarding the latter, you can use any protocol for which you have configured the corresponding service—in this context, an smb server that allows access to the users' files via the Windows Explorer. However, I would definitely not recommend that. Rather, I'd use winscp or implement user access to files via webdav over https.
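For day-to-day use, a small entry in the ssh client configuration saves typing the port, user, and key every time. Host name, port, and user below are placeholders, not my actual settings:

```
# ~/.ssh/config — a hypothetical entry for the server
Host myserver
    HostName server.example.org
    Port 12345
    User cobra
    IdentityFile ~/.ssh/id_ed25519
```

With that in place, `ssh myserver`, `scp file myserver:`, and `sshfs myserver:/home/cobra /mnt/server` all pick up the same settings.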

Can hackers attack the server?

Whatever you call them: there will be plenty of people trying to get access to your server. For example, in the three days in which our new server was running in its default configuration, 'lastb' revealed 6742 login attempts via ssh. Fortunately, our hoster had set a passphrase that was definitely better than the most popular one.
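To see what those attempts look like, one can summarize the `lastb` output. Since `lastb` requires root and reads the server's btmp log, the snippet below runs the pipeline on a few simulated lines (user names and IPs are made up):

```shell
# three fake lines in the format lastb prints (real output needs root)
lastb_sample='root     ssh:notty    203.0.113.5   Mon Feb  6 03:11
admin    ssh:notty    203.0.113.5   Mon Feb  6 03:12
root     ssh:notty    198.51.100.7  Mon Feb  6 03:14'

# count the attempted user names, most popular first
echo "$lastb_sample" | awk '{print $1}' | sort | uniq -c | sort -rn
```

On the real server, `lastb | awk '{print $1}' | sort | uniq -c | sort -rn | head` gives the same ranking for all 6742 attempts.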

What did you do to secure the server and to avoid hackers taking it over?

The measures I usually take are all very simple and do not require membership of the inner circles of server adminship. The core principle is to minimize exposure by scrutinizing the software base.

What do I mean by that statement? I can illustrate it best with an example from one of my Arch systems:

➜  ~ arch-audit
Package bzip2 is affected by ["CVE-2016-3189"]. Update to 1.0.6-6!
Package jasper is affected by ["CVE-2016-9591", "CVE-2016-8886"]. High risk!
Package libtiff is affected by ["CVE-2016-10095", "CVE-2015-7554"]. Critical risk!
Package openjpeg2 is affected by ["CVE-2016-9118", "CVE-2016-9117", "CVE-2016-9116", "CVE-2016-9115", "CVE-2016-9114", "CVE-2016-9113"]. High risk!
Package openssl is affected by ["CVE-2016-7055"]. Low risk!
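The age of those holes is worth a look. A quick pipeline pulls the CVE years out of an audit line (the line below is copied from the arch-audit output above):

```shell
# one line of arch-audit output, taken verbatim from above
audit_line='Package libtiff is affected by ["CVE-2016-10095", "CVE-2015-7554"]. Critical risk!'

# extract the year from each CVE identifier and deduplicate
echo "$audit_line" | grep -o 'CVE-[0-9]*' | cut -d- -f2 | sort -u
```

For libtiff, this prints 2015 and 2016: holes that had been public for a year or more.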

As you see, libtiff is listed as critical, and some of the vulnerabilities even date from 2015. Better to get rid of it, right? Sure, but:

➜  ~ whoneeds libtiff | tr -d '\n'
Packages that depend on [libtiff]  aarchup  artha  auctex  awoken-icons  blueman  chromium  clipit  conky  conky-colors  cups  darktable  djvulibre  emacs  emacs-minimap  engrampa  feh  firefox  galculator  gimp  gimp-webp  gksu  gnome-keyring  gnome-themes-standard  gnuplot  gparted  gpicview  graphviz  gsimplecal  gst-libav  gst-plugins-good  gtk-engine-murrine  gtk-engines  gtk-theme-orion-dark  gtk2-perl  gtk3-print-backends  guake  gucharmap  gvfs  gvim  hplip  hsetroot  inkscape  keepassx2  kodi  libbpg  libcaca  libreoffice-fresh  lxappearance  lxappearance-obconf  lxinput  lxrandr  lxterminal  lyx  masterpdfeditor-qt5  mirage  mpv  mupdf  mupdf-tools  netpbm  network-manager-applet  nitrogen  numix-circle-icon-theme-git  obconf  obkey  obmenu-generator  openbox  openbox-themes  orage  owncloud-client  pavucontrol  pcmanfm  portfolio  povray  pstoedit  pstotext  pychess  python-matplotlib  python-pillow  python-scikit-image  python-seaborn  qpdfview  rawtherapee  ricochet  scribes  scribus  scrot  seahorse  spacefm  spyder3  sqlitebrowser  terminator  texlive-bibtexextra  texlive-core  texlive-fontsextra  texlive-formatsextra  texlive-games  texlive-genericextra  texlive-htmlxml  texlive-humanities  texlive-latexextra  texlive-music  texlive-pictures  texlive-plainextra  texlive-pstricks  texlive-publishers  texlive-science  tint2  tumbler  vertex-themes  vesta  virtualbox  volumeicon  webkitgtk2  wxpython  xfce4-notifyd  xfce4-terminal  yelp  zenity  zim
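That wall of names can be condensed into a single number. `whoneeds` comes from Arch's pkgtools; since it is not available everywhere, the snippet below simulates its two-line output with a shortened package list:

```shell
# simulated whoneeds output: a header line plus the dependent packages
whoneeds_output='Packages that depend on [libtiff]
chromium firefox gimp libreoffice-fresh mpv'

# drop the header, split the names onto separate lines, count them
echo "$whoneeds_output" | tail -n +2 | tr ' ' '\n' | grep -c .   # prints 5
```

Applied to the full list above, the same pipeline reports well over a hundred dependents.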

As you see, essentially everything depends on this package. No way to get rid of it on a desktop system! But surely that's no issue on a pure command-line system like our server, right?

$ aptitude why libtiff5
i   webalizer Depends libgd3 (>= 2.1.0~alpha~)
i A libgd3    Depends libtiff5 (>= 4.0.3)

Ok, let's remove webalizer. But after that:

$ aptitude why libtiff5
i   pinentry-gtk2      Depends libgtk2.0-0 (>= 2.14.0)
i A libgtk2.0-0        Depends libgdk-pixbuf2.0-0 (>= 2.22.0)
i A libgdk-pixbuf2.0-0 Depends libtiff5 (>= 4.0.3)

Who installs pinentry-gtk2 on a system without X server? WHO?

/usr/bin/apt-get --auto-remove purge libtiff5
Requested-By: cobra (1000)
Install: pinentry-curses:amd64 (1.0.0-1, automatic)
Purge: libcroco3:amd64 (0.6.11-2), libpangoft2-1.0-0:amd64 (1.40.3-3), libcups2:amd64 (2.2.1-4), libimlib2:amd64 (1.4.8-1), w3m-img:amd64 (0.5.3-34), libgtk2.0-bin:amd64 (2.24.31-1), libgdk-pixbuf2.0-0:amd64 (2.36.3-1), libpixman-1-0:amd64 (0.34.0-1), libsecret-1-0:amd64 (0.18.5-2), librsvg2-common:amd64 (2.40.16-1), gnome-icon-theme:amd64 (3.12.0-2), libavahi-common-data:amd64 (0.6.32-1), libgail-common:amd64 (2.24.31-1), libavahi-common3:amd64 (0.6.32-1), libgtk2.0-0:amd64 (2.24.31-1), libxcursor1:amd64 (1:1.1.14-1+b1), libthai-data:amd64 (0.1.26-1), libxcb-shm0:amd64 (1.12-1), libid3tag0:amd64 (0.15.1b-12), libsecret-common:amd64 (0.18.5-2), libgail18:amd64 (2.24.31-1), libxcb-render0:amd64 (1.12-1), fontconfig:amd64 (2.11.0-6.7), libtiff5:amd64 (4.0.7-5), libatk1.0-0:amd64 (2.22.0-1), libpangocairo-1.0-0:amd64 (1.40.3-3), librsvg2-2:amd64 (2.40.16-1), pinentry-gtk2:amd64 (1.0.0-1), libgif7:amd64 (5.1.4-0.4), hicolor-icon-theme:amd64 (0.15-1), libthai0:amd64 (0.1.26-1), libgdk-pixbuf2.0-common:amd64 (2.36.3-1), libgtk2.0-common:amd64 (2.24.31-1), libgraphite2-3:amd64 (1.3.9-3), libjbig0:amd64 (2.1-3.1), gtk-update-icon-cache:amd64 (3.22.6-1), libatk1.0-data:amd64 (2.22.0-1), libharfbuzz0b:amd64 (1.2.7-1+b1), libcairo2:amd64 (1.14.8-1), libavahi-client3:amd64 (0.6.32-1), libpango-1.0-0:amd64 (1.40.3-3), libjpeg62-turbo:amd64 (1:1.5.1-2), libdatrie1:amd64 (0.2.10-4)

That was an example illustrating what I meant with “scrutinizing the software base”. But let's proceed step by step and relive the few hours when I configured our new server.

  1. I first tighten the security of sshd:

on the client:

➜  ~ ssh-keygen -t ed25519
➜  ~ ssh-copy-id -i ~/.ssh/

on the server:

su -
# vim /etc/ssh/sshd_config
Port XYZ
PermitRootLogin no
ChallengeResponseAuthentication no
PasswordAuthentication no
# systemctl restart sshd.service

XYZ has to be replaced with a sensible port number, of course. ;)

  2. Relieved, I next check which services are running on the system to have an overview:
systemctl --type=service
  3. I then look for services that have opened a port and are listening on it. I prefer to use
netstat -tulpen

for this purpose,1 but I usually also install 'iftop' and 'iptraf' to have a look at the traffic.

1 Note that you have to install the 'net-tools' package on many distributions, as 'netstat' has been deprecated since 2011 in favour of the 'ss' command from the iproute2 package. The 'netstat' output is much more compact and readable, though.

Obviously, it is somewhat paradoxical to rely on a local check on a system which might already have been compromised. I thus also use 'nmap' to have a look from the outside:

nmap -sS -sU -T4 -A -v
  4. The simple tests above reveal that the server was basically prepared to run an online shop and thus has plenty of services running: Apache, nginx, postfix, dovecot, mysqld, sshd, froxlor, etc. I stop and disable all of them except sshd:
systemctl stop <service>
systemctl disable <service>

deinstall them:

apt purge <service>
  5. After that, there are plenty of orphans that I remove with
wajig autoremove
  6. Update:
wajig dailyupgrade
  7. Upgrade:
vim /etc/apt/sources.list
wajig daily-upgrade
wajig sys-upgrade

This last step may appear questionable, as Debian Testing (currently called Stretch) does not receive the same security support as Stable (currently called Jessie). Well, I definitely prefer Testing for its more up-to-date packages, and I think it's more important to avoid packages from the contrib and non-free repositories.
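For reference, a minimal sources.list for such a setup could look as follows (mirror and suite names are examples; note the absence of contrib and non-free):

```
# /etc/apt/sources.list — tracking Testing, main only
deb http://deb.debian.org/debian testing main
deb-src http://deb.debian.org/debian testing main
```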

  8. I then check the support status of my installation:

debian-security-support “...identify installed packages for which support has had to be limited or prematurely ended...”


Everything's supported. The more software you have installed, the less likely this result becomes.

  9. And finally, I search for vulnerabilities (similar to arch-audit above):

debsecan “...generates a list of vulnerabilities which affect a particular Debian installation...”

CVE-2016-2148 busybox (remotely exploitable, high urgency)

What is busybox doing here? Well, it's gone.

Update: Damn, I forgot – it's needed for update-initramfs. No big deal, though: what you can remove that easily can be reinstalled just as easily. So don't worry, you won't be able to accidentally remove the kernel or libc. ;)

After these nine steps (the nine hidden secrets for perfect server security!!!), the total size of our installation (disregarding user content in /home and in /var/www) is less than 1.2 GB.

What have I achieved so far? Well, first of all, I have stopped and removed all running services I do not need. That's certainly the most important contribution to server security, as all of these services were remotely accessible. Second, I have upgraded the entire installation to a current version of the distribution, in the belief that, as a tendency, previously reported CVEs have already been recognized and fixed in this version. Third, I have identified and removed the remaining programs and libraries with vulnerabilities rated as critical.

What can I do more? Can I rate the security of the system somehow, and monitor it?

Yes. Such a rating is, for example, offered by lynis, a security audit tool by rkhunter author Michael Boelen, which provides a wealth of helpful information and advice out of the box, without the need to configure anything. Great for beginners, useful for advanced users. Worth installing for its suggestions concerning the configuration of the ssh server alone. But beware, and don't lock yourself out. ;)

With the current configuration, lynis gives a hardening index of 78%. I'm quite satisfied with that score (you probably won't get a 100% as long as the server is still connected to a network).

How can I make sure that we keep that score? Well, lynis is really very helpful in that respect, since it suggests, depending on the distribution, the installation of several useful tools that help in future security-related decisions.

Many of these tools, however, work best when they are executed by a cronjob in the background and inform the administrator by local mail in case there's anything to report. For this reason, it is imperative for any Linux server installation to include a functional mail transfer agent (MTA) configured for local delivery. In Debian, I always chose exim, because it's so wonderfully easy to configure for this case. I'm so used to this genie on the system, telling me about the good and the bad, that I install an MTA not only on servers, but on every system I administer (although I usually prefer postfix over exim).
When performing system updates on Debian, I additionally like to have the following tools as little helpers in the background. apt-listbugs “retrieves bug reports from the Debian Bug Tracking System and lists them”, apt-listchanges “compares a new version of a package with the one currently installed”, apt-show-versions “shows upgrade options within the specific distribution of the selected package”, checkrestart (part of debian-goodies) “helps to find and restart processes which are using old versions of upgraded files (such as libraries)”, and needrestart “checks which daemons need to be restarted after library upgrades”. I also like logcheck which “helps spot problems and security violations in your logfiles automatically and will send the results to you in e-mail” (see above).

These tools are helpful, but I like to go one step further and have an automated, daily security check. That's exactly what checksecurity does, which, according to Debian, performs “basic system security checks”. Well, how basic depends a lot on the packages installed in addition: recommended are, among others, tiger, which again refers to other packages such as chkrootkit, “searching the local system for signs that it is infected with a 'rootkit'”, as well as file monitoring systems such as tripwire and aide, which just make little sense on a rolling-release system. This fact does not diminish the value of checksecurity, of course, which I would very much recommend installing.
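Most of these checkers ship their own cron snippets, but to illustrate the general pattern, a daily debsecan report mailed to root could be wired up like this (the file name and schedule are my arbitrary choices):

```
# /etc/cron.d/debsecan-report — hypothetical daily vulnerability mail
30 4 * * * root /usr/bin/debsecan --suite stretch --only-fixed | mail -s "debsecan report" root
```

This is where the local MTA mentioned above earns its keep: without local mail delivery, reports like this one silently go nowhere.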

You certainly have noticed that I have so far not even mentioned the security evergreen: the firewall. Well, I do recognize the value of an enterprise-class firewall for a corporate network, but here we are talking about software running on the very system that we desire to protect. This scenario reminds us of the infamous 'personal firewalls' under Windows, the legendary discussions on nntp:// and fefe's succinct summary:

Do Personal Firewalls improve security? — No.

Why do so many people install them, then? — Because those people are all idiots.

Well, nobody would judge the built-in firewall functionality of Linux equally harshly, and there are even one or two arguments in favor of using it. My view is that this built-in firewall is secondary compared to the measures discussed above, but it certainly doesn't hurt to use it. And that's what I do:

ufw default deny
ufw limit ssh
ufw allow http
ufw allow ...

One final word: be careful not to overdo things. The more security-related stuff you install, the more messages you will get, and the more dramatic it will all sound. For example, chkrootkit identifies the mosh server instance running on udp port 60001 as an infection when running the bindshell test. That's a trivial false positive, but in the grip of security paranoia, it will be amplified to the point where it can unbalance even experienced administrators. Be calm, practice Zen, and acquire enough knowledge to immunize yourself against full-blown security hysteria.

The end of infinality

If you use Archlinux and the infinality bundle from bohoomil's repository: yesterday's update of harfbuzz from 1.3.4-1 to 1.4.1-1 may break important parts of your setup. The reason is that infinality uses an outdated version of freetype2, and at present it seems unlikely that we will ever see an update:

Reason: Infinality is dead both upstream and with the downstream maintainer bohoomil, and differences with freetype upstream become small as development progresses

For details, see here and here.

Instead of downgrading harfbuzz, I thus reverted to the stock versions of freetype2, fontconfig, and cairo:

pacman -S --asdeps lib32-freetype2 lib32-cairo lib32-fontconfig
pacman -S --asdeps freetype2 cairo fontconfig

The first command only applies if you have the 32-bit multilib packages installed, as required, for example, by steam.

I have not yet replaced the fonts, nor is there any immediate need to do so. In fact, the current stock freetype2 now seems to offer font rendering of a quality equivalent to the previous freetype2 with the infinality patchset. Excellent!

I've just checked and found that the situation is even worse for Debian: on my mini (running Stretch/Sid), the installed version of freetype2 is 2.4.9 from 2012. Compared to that, the stock version (2.6.3) can almost be called up-to-date...

apt purge fontconfig-infinality
apt purge libfreetype-infinality6
apt install libfreetype6
Contents © 2017 Cobra - Powered by Nikola