Home Office

The spread of SARS-CoV-2 has made it advisable for many people to work from home. My colleagues and I have been doing that for four weeks now, and it's working very well. For me, the home office isn't new: I've used this option for a decade whenever I have a task at hand requiring particular concentration and focus. Writing papers or proposals is such a task, as is developing and implementing a quantitative model to understand experimental data (that's what lucky physicists do for a living). In fact, colleagues asked me in January to help with the development of such a model, which I expected to be challenging, but not as difficult as it actually turned out to be. For most of the time, I was rather cluelessly poking around in a forest of equations and not getting anywhere.

During the last few days, I made an effort to refocus on this issue, not just for a few hours, but for a couple of days: you go to bed with the problem and wake up with it, and there's nothing to distract you from it. This kind of total concentration is simply not possible in the daily office routine, but I can do it at home, basically returning to my time as a student when every waking moment was devoted to problem solving. What greatly helps with reaching this trance-like state is having no kids, an understanding wife, and softly purring cats that love to sleep in the chairs to my left and right. The breakthrough occurred after two days, all of a sudden, like a flash. I still had technical problems to solve, but the direction was clear. These are the moments that every scientist cherishes and holds most dear: the intense joy of having solved the problem, of having broken the code. 😌

I realize that not everybody has the same favorable boundary conditions as I do, or even the luxury to compare. And I understand that the situation is very different with young kids instead of cats. 😉 But still, I'm really tired of reading commentaries in the newspapers moaning about the “solitary confinement” and how unbearable it is. Most of them stem from rather young people with a smartphone glued to their right hand and a firm belief in their God-given right to party. Even worse are the characters with a political agenda, bitterly complaining about violations of our constitutional rights and predicting the end of democracy. What unites these two apparently very different groups is their failure to understand even the simplest arithmetic. And indeed, no calculus is needed to understand the simple concept of the exponential spread of a virus.

More realistic models are based on systems of differential equations similar to the ones describing the zombie apocalypse. There, the infection rate of the human population depends on the infection probability when a human and a zombie meet. Similarly, in epidemiology, the spread of an infectious disease is characterized by R0, the basic reproduction number. This number determines how fast the infection spreads (i.e., the slope of the exponential), and if the effective reproduction number decreases with time (which is highly desirable), the curve “flattens”. The curve remains an exponential, however, as long as the reproduction number stays above 1.
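The arithmetic behind that last sentence can be made concrete with a toy calculation (a deliberately crude sketch with an assumed reproduction number of 2.5 and discrete generations, not one of the differential-equation models mentioned above): every generation, the case count is simply multiplied by the reproduction number.

```shell
# Toy illustration of exponential spread: 100 initial cases, a constant
# reproduction number of 2.5 (an assumed value), one line per generation.
awk 'BEGIN {
  cases = 100
  for (gen = 1; gen <= 10; gen++) {
    cases *= 2.5
    printf "generation %2d: %d cases\n", gen, int(cases)
  }
}'
```

After only ten generations, the initial 100 cases have grown to nearly a million; with a reproduction number below 1, the same loop would instead decay toward zero.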

“The greatest shortcoming of the human race is our inability to understand the exponential function.” (A. A. Bartlett)

makepkg.conf

In a previous post, I've remarked:

“If the distributor commands over virtually unlimited resources, and compression speed is thus not an issue, brotli and zstd are clearly superior to all other choices. That's how we would like to have our updates: small and fast to decompress.”

And not even a year later, we get this announcement. My own measurements indicated a factor of 8 increase in decompression speed, but the Arch team even sees a factor of 14. Great! ☺

There are also a few settings in /etc/makepkg.conf that can greatly accelerate the installation of packages from the AUR. All details can be found in the Arch Wiki, but here are the modifications I'm using, in order of appearance:

# building optimized binaries
CFLAGS="-march=native -O2 -pipe -fstack-protector-strong -fno-plt"
CXXFLAGS="${CFLAGS}"
# use all cores for compiling
MAKEFLAGS="-j8"
# compile in RAM disk
BUILDDIR=/tmp/makepkg
# use package cache of pacman
PKGDEST=/var/cache/pacman/pkg
# enable multicore compression for the algorithms supporting it
COMPRESSGZ=(pigz -c -f -n)
COMPRESSBZ2=(lbzip2 -c -f)
COMPRESSXZ=(xz -c -z - --threads=0)
COMPRESSZST=(zstd -c -z -q - --threads=0)
# use lz4 as default package format (the command line lz4 does not yet support multi-threading, but it's still faster than anything else)
PKGEXT='.pkg.tar.lz4'
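One nitpick on the settings above: MAKEFLAGS="-j8" hardcodes the core count. A small sketch (using nproc from GNU coreutils) generates the line from whatever machine it runs on instead:

```shell
# Print a MAKEFLAGS line matching the actual number of CPU cores;
# paste or redirect it into /etc/makepkg.conf as desired.
echo "MAKEFLAGS=\"-j$(nproc)\""
```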

I didn't perform any systematic measurements, but some AUR packages seem to install in seconds where they took minutes with the default configuration. YMMV, but it's worth a try.

The five dimensions of heat

Most people love to travel. I don't. First of all, I dislike the modern way of transportation, which reduces travel to a mere logistics problem, namely, that of transporting human cargo at minimum cost. Second, I'm a creature of habit, and traveling inevitably interferes with my habits. And third, away from my natural habitat I miss the native diet of a chilivore.

It's not that I want every dish to be hot as hell, but what I can't stand is the styrofoam taste of the stuff one gets for food on airplanes and in related places. Heat here doesn't refer to the physical quantity, but to the sensation experienced when consuming certain spices, which is commonly also called spiciness, hotness, or pungency. This quality is usually associated only with chili, but that's a fairly one-dimensional view.

Chili

../images/chili_scaled.jpeg

“The food of the true revolutionary is the red pepper, and he who cannot endure red peppers is also unable to fight.” So said Mao Zedong (毛澤東), who was born in the Hunan province of China, home to one of the eight great traditions of Chinese cuisine, well known for its liberal use of hot chilis.

The active substance in all chili peppers is capsaicin. The Scoville scale provides a measure of the amount of capsaicin in a given plant and ranges from 0 to 16,000,000 Scoville heat units (SHU). The hottest chili on earth is currently the Carolina Reaper with a breathtaking value of 1,569,300 SHU (for reference: the spiciness of the original red Tabasco is no higher than 3,500 SHU). Now, this kind of hyper-hot chili is mainly a fetish for chili heads (like myself) and is commercially valuable only as a tourist attraction (see here). Nobody in their right mind would use such a designer chili for actual food. The hottest variety I've seen in authentic indigenous food is the bird's eye chili, which is particularly popular in Thai food and scores around 100,000 SHU. I haven't seen anything substantially hotter in Mexico, but then I've never been to Yucatán, where people reportedly eat habaneros (up to 350,000 SHU) for breakfast.

Black pepper and ginger

Both are known for adding flavor rather than heat. What is not commonly known is that both of these spices contain substances that are chemical relatives of capsaicin, have an analogous effect, and can be measured on the same scale. The active substance in black pepper, for example, is called piperine and scores 100,000 SHU. Not far behind is gingerol in ginger with 60,000 SHU. Even considering that the actual plants contain only a few percent of these active substances, they can be surprisingly hot when used liberally.

../images/ginger_scaled.jpeg

Mustard/Horseradish/Wasabi

../images/mustard_seeds_scaled.jpeg

A completely different kind of hotness is produced by a substance called allyl isothiocyanate, which is contained in mustard seeds as well as in horseradish and wasabi, and affects the nose rather than the throat. Personally, I don't like the popular chili mustards that attempt to combine the distinct types of heat offered by chili and mustard, but many people just love them for barbecue. Furthermore, many popular curry powder recipes combine mustard seeds with chili. In any case, mustard seeds are part of human food everywhere in the world.

Sichuan Pepper

../images/szechuan_vs_black-pepper.jpeg

Sichuan pepper has nothing to do with any other pepper (particularly not with the very similar-looking black pepper shown above). It contains hydroxy-α-sanshool, which has a unique effect unlike anything experienced with ordinary pepper, chili, or mustard. In the words of Harold McGee in his book On Food and Cooking, the sanshools “produce a strange, tingling, buzzing, numbing sensation that is something like the effect of carbonated drinks or of a mild electric current (touching the terminals of a nine-volt battery to the tongue).” It is used, not surprisingly, mostly in Sichuan (四川菜) food such as mapo doufu (麻婆豆腐) – my absolute favorite and, IMHO, the undisputed crown of Chinese cuisine.

Garlic/Onions

Raw onions and even more so raw garlic develop an intense heat due to the substance allicin. When consumed in large quantities, the effect can be rather overwhelming. To give you at least an idea of what I'm talking about, have a look at the main vegetable side dish you get when ordering the traditional bulgogi (불고기) in Korea:

../images/garlic_in_korea_scaled.jpeg

The meat coming with this friendly offer is marinated with a paste containing, among other delicious ingredients, loads of garlic. 😆 And the other side dishes consist of a green onion salad and (rather mild) chili peppers. 😵 After enjoying this course, I wasn't surprised to hear that onions and garlic have their own pungency standard, namely, the pyruvate scale.

Only these few basic ingredients are needed to create the infinite variety of spicy food all around the world, from Thailand to the Caribbean islands, from Mexico to Ethiopia, from Cajun country to Korea. Sometimes we may find the taste disagreeable or too extreme, like the eternal wasabi and ginger in Japan, or the endless garlic and onions in Korea. Don't be shy, vocalize your needs: I got hot chilis in Japan and ginger in Korea just by asking.

A spicy new year to all of you! 😊

Upgrade to Windows 10

My wife's gaming rig is still running on Windows 7, and it's about time to change that. In principle, an upgrade to Windows 10 requires only the download of the Media Creation Tool from Microsoft on the system to be upgraded. However, to save myself the download of several GB of data with my meager DSL connection at home, I've used this tool in my virtual Windows 7 at the office to prepare a USB stick for installation. It took some time to find an active download link for the USB 3.0 drivers compatible with VirtualBox (Intel 7 Series C216 Chipset Family, which Intel has discontinued), but in the end I had my stick.

After uninstalling the antivirus scanner (Avast), I've plugged in the stick, clicked on setup.exe, and off it went. But after 15% installation progress:

Error: 0x8007025D-0x2000C
The installation failed in the SAFE_OS phase with an error during APPLY_IMAGE operation.

Just below the error message is a link that can be neither clicked nor copied. Now that's usability! In any case, the suggestions on this page aren't helpful at all, but send users experiencing this error message down the wrong track. Fortunately, third-party pages such as techjourney and The Windows Club do much better in this respect, in that they have the most likely reason at the top of their list: corrupted installation media. And in fact, when I simply let the Media Creation Tool download the files, the upgrade worked flawlessly. It didn't even take 3 h including the download, which was much faster than I had expected.

You see how easy the upgrade is – even for someone who hasn't actively used Windows for 15 years. Don't be one of those pathetic figures who are eternally whining and bawling that they have a God-given right to use Windows XYZ until the end of time, loudly insisting that Microsoft must be compelled by international (or at least European) law to keep the OS in question alive. Get a grip on yourself, do the update, and deal with it, for Pete's sake. Or switch to OpenBSD or another of these geekish systems. You could also buy a Mac, if you insist. But don't act like a newborn.

For us, Windows 10 itself is not entirely new: earlier this year, we purchased a Lenovo Miix 630 to accompany my wife on her trip to Japan. We got this 'Windows on ARM' detachable for €444, complete with a backlit type cover and a pen, 8 GB of RAM, and LTE, allowing her to access the internet from home without the need to search for places offering public wifi. The Miix turned out to be very versatile and fun to use, and it has an almost unbelievable battery life in excess of 20 h thanks to its Snapdragon 835 processor (a mid-range smartphone SoC). What I also like is the rolling-release concept of Windows 10, which guarantees that the device isn't obsolete after at most three years, as is customary for Android gadgets. It's a pity that this interesting concept is so unpopular. Lenovo has already stopped production of the Miix, and there aren't any others like it (the Surface Pro X from Microsoft costs more than three times as much).

Unbound Plus

As detailed in a previous post, I'm running Unbound as a local DNS resolver. I've configured it to use DNS over TLS, and while things were a little shaky in the beginning with the few available servers supporting this security protocol, I haven't needed to switch back to plain DNS for at least a year. And I'm not using the commercial providers that have decided to jump on the bandwagon, namely Google (8.8.8.8), Cloudflare (1.1.1.1), and Quad9 (9.9.9.9). I wouldn't touch them except for testing purposes.

However, my initial motive for running a local resolver was not security or privacy, but latency, and DNS over TLS, being based on TCP instead of UDP like plain DNS, definitely doesn't help with that. In fact, unencrypted queries over UDP are generally much faster than encrypted ones over TCP, but the actual latency varies strongly depending on the server queried.

Dnsperf is the standard tool for measuring the performance of an authoritative DNS server, but it doesn't support TCP, and the patched version is seriously outdated. Flamethrower is a brand-new alternative that looks very promising, but I got inconsistent results from it (I'm pretty sure that was entirely my fault).

The standard DNS query tools dig (part of bind) and drill (part of ldns) don't support TLS, but kdig (part of knot) supposedly does. An alternative is pydig, which I used two years ago to check whether an authoritative server offers DNS over TLS, and which turned out to be just as helpful for determining the latency of a list of DNS servers (one IP per line). After updating ('git pull origin master'), I fed this list (called, let's say, dns-servers.txt) to pydig using

while read p; do ./pydig @$p +dnssec +tls=auth ix.de | grep 'TLS response' | awk '{print substr($0, index($0, $10))}'; done < dns-servers.txt

with an explicit (+) requirement for DNSSEC and TLS (or without for plain DNS).
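For reference, the awk filter at the end of the one-liner prints everything from the tenth whitespace-separated field onward, i.e., it strips the fixed prefix of the 'TLS response' line and keeps the latency figure. A demonstration on a fabricated sample line (the field layout here is my assumption for illustration, not pydig's documented output format):

```shell
# Fields 1-9 are metadata; field 10 starts the part we care about.
echo "2024-01-01 12:00:00 TLS response from 9.9.9.9 port 853 in 60 ms" |
  awk '{print substr($0, index($0, $10))}'
# → 60 ms
```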

I got a few really interesting results this way. For example, Cloudflare is invariably the fastest service available here at home, with latencies of 9 and 60 ms for plain and TLS-encrypted queries, respectively. From pdes-net.org, the situation is different: Cloudflare takes 4 and 20 ms, while dnswarden returns results within 1 and 9 ms, respectively. Insanely fast!

This latter service (where the hell did it come from all of a sudden?) is also very competitive with Google and Quad9 here at home: all of them need about 100 ms to answer TLS requests. That seems terribly slow, but it's not as bad as it sounds. First, I've configured Unbound as a caching resolver, so many, if not most, requests are answered with virtually zero latency. Second, I minimize external requests by serving the root zone locally – also known as the hyperlocal concept.

Due to this added functionality, I've found it necessary to revamp the configuration. All main and auxiliary configuration files of my current Unbound installation are attached below.

Main configuration files

/etc/unbound/...

.../unbound.conf

include: "/etc/unbound/unbound.conf.d/*.conf"

.../unbound.conf.d/01_Basic.conf

server:
        verbosity: 1
        do-ip4: yes
        do-ip6: yes
        do-udp: yes
        do-tcp: yes

        use-syslog: yes
        do-daemonize: no
        username: "unbound"
        directory: "/etc/unbound"

        root-hints: root.hints

        #trust-anchor-file: trusted-key.key
        auto-trust-anchor-file: trusted-key.key

        hide-identity: yes
        hide-version: yes
        harden-glue: yes
        harden-dnssec-stripped: yes
        use-caps-for-id: yes

        minimal-responses: yes
        prefetch: yes
        qname-minimisation: yes
        rrset-roundrobin: yes

        ## reduce edns packet size to help big udp packets over dumb firewalls
        #edns-buffer-size: 1232
        #max-udp-size: 1232

        cache-min-ttl: 3600
        cache-max-ttl: 604800

        include: /etc/unbound/adservers

.../unbound.conf.d/02_Forward.conf

server:
          interface: ::0
          interface: 0.0.0.0
          access-control: ::1 allow
          access-control: 2001:DB8:: allow
          #access-control: fd00:aaaa:bbbb::/64 allow
          access-control: 192.168.178.0/24 allow
          verbosity: 1
          ssl-upstream: yes

forward-zone:
# forward-addr format must be ip "@" port number "#" followed by the valid public
# hostname in order for unbound to use the tls-cert-bundle to validate the dns
# server certificate.
          name: "."
          # Servers support DNS over TLS, DNSSEC, and (partly) QNAME minimization
          # see https://dnsprivacy.org/jenkins/job/dnsprivacy-monitoring/

           ### commercial servers for tests
                #forward-addr: 1.1.1.1@853                      #cloudflare-dns.com
                #forward-addr: 8.8.8.8@853                      #dns.google
                #forward-addr: 9.9.9.9@853                      #dns.quad9.net

                ### fully functional (ordered by performance)
                forward-addr: 116.203.70.156@853                #dot1.dnswarden.com
                forward-addr: 116.203.35.255@853                #dot2.dnswarden.com
                #forward-addr: 185.49.141.37@853                #getdnsapi.net
                #forward-addr: 185.95.218.43@853                #dns.digitale-gesellschaft.ch
                #forward-addr: 146.185.167.43@853               #dot.securedns.eu
                #forward-addr: 145.100.185.15@853               #dnsovertls.sinodun.com
                #forward-addr: 145.100.185.16@853               #dnsovertls1.sinodun.com
                #forward-addr: 46.182.19.48@853                 #dns2.digitalcourage.de
                #forward-addr: 80.241.218.68@853                #fdns1.dismail.de
                #forward-addr: 89.233.43.71@853                 #unicast.censurfridns.dk

                ### temporarily (2019/11/05) or permanently broken
                #forward-addr: 145.100.185.17@853               #dnsovertls2.sinodun.com
                #forward-addr: 145.100.185.18@853               #dnsovertls3.sinodun.com
                #forward-addr: 158.64.1.29@853                  #kaitain.restena.lu
                #forward-addr: 199.58.81.218@853                #dns.cmrg.net
                ##forward-addr: 81.187.221.24@853               #dns-tls.bitwiseshift.net
                ##forward-addr: 94.130.110.185@853              #ns1.dnsprivacy.at
                ##forward-addr: 94.130.110.178@853              #ns2.dnsprivacy.at
                #forward-addr: 89.234.186.112@853               #dns.neutopia.org

.../unbound.conf.d/03_Performance.conf

# https://www.unbound.net/documentation/howto_optimise.html
server:
                # use all cores
                num-threads: 8

                # power of 2 close to num-threads
                msg-cache-slabs: 8
                rrset-cache-slabs: 8
                infra-cache-slabs: 8
                key-cache-slabs: 8

                # more cache memory, rrset=msg*2
                rrset-cache-size: 200m
                msg-cache-size: 100m

                # more outgoing connections
                # depends on number of cores: 1024/cores - 50
                outgoing-range: 100

                # Larger socket buffer.  OS may need config.
                so-rcvbuf: 8m
                so-sndbuf: 8m

                # Faster UDP with multithreading (only on Linux).
                so-reuseport: yes

.../unbound.conf.d/04_Rootzone.conf

# “Hyperlocal” configuration.
# see https://forum.turris.cz/t/undbound-rfc7706-hyperlocal-concept/8761
# furthermore
# https://forum.kuketz-blog.de/viewtopic.php?f=42&t=3067
# https://tools.ietf.org/html/rfc7706#appendix-A
# https://tools.ietf.org/html/rfc7706#appendix-B.1
# https://www.iana.org/domains/root/servers

auth-zone:
 name: .
 for-downstream: no
 for-upstream: yes
 fallback-enabled: yes
  #master: 198.41.0.4                   # a.root-servers.net
 master: 199.9.14.201                   # b.root-servers.net
 master: 192.33.4.12                    # c.root-servers.net
  #master: 199.7.91.13                  # d.root-servers.net
  #master: 192.203.230.10               # e.root-servers.net
 master: 192.5.5.241                    # f.root-servers.net
 master: 192.112.36.4                   # g.root-servers.net
  #master: 198.97.190.53                # h.root-servers.net
  #master: 192.36.148.17                # i.root-servers.net
  #master: 192.58.128.30                # j.root-servers.net
 master: 193.0.14.129                   # k.root-servers.net
  #master: 199.7.83.42                  # l.root-servers.net
  #master: 202.12.27.33                 # m.root-servers.net
 master: 192.0.47.132                   # xfr.cjr.dns.icann.org
 master: 192.0.32.132                   # xfr.lax.dns.icann.org

 zonefile: "root.zone"

Auxiliary configuration files

/etc/cron.weekly/...

.../adserver_updates

#!/bin/bash
# Updating Unbound resources.
# Place this into e.g. /etc/cron.weekly

###[ adservers ]###

curl -sS -L --compressed -o /etc/unbound/adservers.new "https://pgl.yoyo.org/adservers/serverlist.php?hostformat=unbound&showintro=0&mimetype=plaintext"

if [ $? -eq 0 ]; then
  mv /etc/unbound/adservers /etc/unbound/adservers.bak
  mv /etc/unbound/adservers.new /etc/unbound/adservers
  unbound-checkconf >/dev/null
  if [ $? -eq 0 ]; then
        rm /etc/unbound/adservers.bak
        systemctl restart unbound.service
  else
        echo "Warning: Errors in newly downloaded adservers file probably due to incomplete download:"
        unbound-checkconf
        mv /etc/unbound/adservers /etc/unbound/adservers.new
        mv /etc/unbound/adservers.bak /etc/unbound/adservers
  fi
else
  echo "Download of unbound adservers failed!"
fi

.../roothint_updates

#!/bin/bash
# Updating Unbound resources.
# Place this into e.g. /etc/cron.weekly


###[ root.hints ]###

curl -sS -L --compressed -o /etc/unbound/root.hints.new https://www.internic.net/domain/named.cache

if [ $? -eq 0 ]; then
  mv /etc/unbound/root.hints /etc/unbound/root.hints.bak
  mv /etc/unbound/root.hints.new /etc/unbound/root.hints
  unbound-checkconf >/dev/null
  if [ $? -eq 0 ]; then
        rm /etc/unbound/root.hints.bak
        systemctl restart unbound.service
  else
        echo "Warning: Errors in newly downloaded root.hints file probably due to incomplete download:"
        unbound-checkconf
        mv /etc/unbound/root.hints /etc/unbound/root.hints.new
        mv /etc/unbound/root.hints.bak /etc/unbound/root.hints
  fi
else
  echo "Download of unbound root.hints failed!"
fi

/etc/systemd/system/unbound.service.d

I've discarded my custom systemd snippet for fetching the DNSSEC trust anchor. Arch Linux provides the anchor automatically as a dependency of unbound (dnssec-anchors), so why complicate things? For other distributions, however, the snippet may still be useful, so here it is:

[Service]
ExecStartPre=/usr/bin/sudo -u unbound /usr/bin/unbound-anchor -a /etc/unbound/trusted-key.key

山葵 (Wasabia japonica)

My wife had to attend to urgent family matters and went home for a few weeks. When she asked me whether there was anything I'd like her to bring back, well, of course: wasabi! Now, most of you will already have been to sushi shops or Japanese restaurants, and you may thus believe that you know what I'm talking about. You don't.

Personally, I've been served genuine wasabi in only two places in Japan, one in Osaka, one in Tokyo, both places I normally wouldn't even dream of visiting, since I don't want to spend my monthly income in one evening. But that's where I learned what wasabi actually is – not the colored horseradish one gets almost everywhere, even in Japan (and certainly in Berlin), but one of the most delicious and stimulating spices and condiments I've ever had the pleasure to experience.

My wife bought a small root as well as an おろし金 (shark-skin oroshigane), since wasabi is enjoyable only when very finely grated. But upon arriving at the airport, she was held back by the authorities, since one cannot possibly take the national treasures of Japan abroad without registering them. 😱

Well, after filling out a phytosanitary certificate and getting it officially stamped, she was allowed to enter the plane to Helsinki. 😌

We are now having dinner and are enjoying the fresh wasabi together with good bread, butter, and smoked salmon (and beer). 😋 美味しい (Oishii)! 乾杯 (Kampai)!

../images/wasabi_scaled.jpeg

InspIRCd 3

All of a sudden, the PdeS IRC channel stopped working. As inexplicable as this sudden disruption first appeared, the reasons are obvious in hindsight. What had happened?

On August 18, apt offered an InspIRCd update, dutifully asking whether I wanted to keep the configuration files. I didn't realize at that moment that the update was in fact the upgrade from version 2 to 3 I had been waiting for since May. As a matter of fact, this upgrade is disruptive and requires one to carefully review and modify the configuration of InspIRCd. Well, I failed to do that, and I also failed to notice that the InspIRCd service didn't restart after the update.

Sometimes people jokingly remark that I should work as a system or network admin rather than as a scientist. This incident shows that I'm not qualified for such a job. I'm way too careless.

In any case, I now had to find the reason why the InspIRCd service quit. It wasn't too difficult, but it was a multi-step procedure. The first obstacle was an outdated AppArmor profile, which allowed InspIRCd to write to /run, but not to /run/inspircd. That was easily fixed.
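The fix amounts to extending the profile to cover the runtime directory; a sketch of the relevant lines (the profile path and exact rules are an assumption from memory and may differ on your system):

```
# in /etc/apparmor.d/usr.sbin.inspircd: allow the daemon to create
# and write its runtime directory
/run/inspircd/ rw,
/run/inspircd/** rw,
```

After editing the profile, reload it with apparmor_parser -r or restart the apparmor service.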

The second was the TLS configuration of our channel. I took the opportunity to renew our certificate and to altogether strengthen the security of the channel, but it took me a while to realize that the identifier in the bind_ssl and sslprofile_name tags has to be one and the same (it isn't in the documentation!).

<bind
          address=""
          port="6697"
          type="clients"
          ssl="pdes">

<module name="ssl_gnutls">

<sslprofile
          name="pdes"
          provider="gnutls"
          certfile="cert/cert.pem"
          keyfile="cert/key.pem"
          dhfile="cert/dhparams.pem"
          mindhbits="4096"
          outrecsize="4096"
          hash="sha512"
          requestclientcert="no"
          priority="PFS:+SECURE256:+SECURE128:-VERS-ALL:+VERS-TLS1.3">

Well, the channel is up again, more secure than ever. Fire away. 😅

Opposite extremes

I have a CentOS virtual machine because I had to install CentOS on a compute server in my office, and I keep it since it's such an interesting antithesis to the rolling-release distribution I prefer for my daily computing environment. CentOS is by a large margin the most conservative of all Linux distributions, and it's sometimes useful for me to have access to older software in its natural habitat. Just look at this table comparing the versions of some major packages on fully updated Arch and CentOS 7 installations:

                Current                         Arch                            CentOS
linux           5.1.15                          5.1.15                          3.10
libc            2.29                            2.29                            2.17
gcc             9.1.0                           9.1.0                           4.8.5
systemd         242                             242                             219
bash            5.0                             5.0                             4.2
openssh         8.0p1                           8.0p1                           7.4p1
python          3.7.3                           3.7.3                           2.7.5
perl            5.30.2                          5.30.2                          5.16.3
texlive         2019                            2019                            2012
vim             8.1                             8.1                             7.4
xorg            1.20.5                          1.20.5                          1.20.1
firefox         67.0.1                          67.0.1                          60.2.2
chromium        75.0.3770.100                   75.0.3770.100                   73.0.3683.86

You can easily see why I prefer Arch over CentOS as a desktop system.

But CentOS has its merits, particularly for servers. There's no other distribution (except, of course, its commercial sibling RHEL) with a longer support span: CentOS 7 was released in July 2014 and is supported until the end of June 2024. And that's not just partial support, as with the so-called LTS versions of Ubuntu.

Now, I've noticed that CentOS keeps old kernels after updates, befitting its highly conservative attitude. However, in view of the very limited hard-disk space I typically give my virtual machines (8 GB), I got a bit nervous when I saw that kernels really seemed to pile up after a few updates: there were five of them! It turned out that's the default, giving the word “careful” an entirely new meaning.

But am I supposed to remove some of these kernels manually? No. I was glad to find that the RHEL developers had already recognized the need for a more robust solution:

yum install yum-utils
package-cleanup --oldkernels --count=3

And to make this limit permanent, I just had to edit /etc/yum.conf and set

installonly_limit=3

Well thought out. 😉
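For the record, the same edit can be scripted; a sketch assuming the installonly_limit key already exists in /etc/yum.conf (it does by default on CentOS 7):

```shell
# Replace the existing installonly_limit line and show the result.
sed -i 's/^installonly_limit=.*/installonly_limit=3/' /etc/yum.conf
grep '^installonly_limit' /etc/yum.conf
```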

What you don't want to use, revisited

A decade ago, I advised my readers to stay away from OpenOffice for the preparation of professional presentations, primarily because of its poor support for vector graphics formats at that time. In view of the difficulties we recently encountered when working with collaborators on the same document with different Office versions, I was now placing great hopes in LibreOffice for the preparation of our next project proposal. First, I thought that with platform-independent open-source software, it should be straightforward to guarantee that all collaborators use the same version. Second, support for SVG has much improved in recent versions (>6) of LibreOffice, and I believed we should finally be able to import vector graphics directly from Inkscape into an Office document. Third, the TexMaths extension allows one to use LaTeX for typesetting equations and to insert them as SVG, promising much improved math rendering in a fraction of the time required by the native equation editor. Fourth, Mendeley offers a citation plugin for LibreOffice, which I hoped would make managing the bibliography and inserting citations as simple as with BibTeX in a LaTeX document.

Well, all of these hopes were in vain. What we (I) had chosen for preparing the proposal (the latest LibreOffice, the TexMaths extension, and the Mendeley plugin) proved to be one of the buggiest software combinations of all time.

ad (i): Not the fault of the software, but still kind of sobering: our external collaborator declared that he had never heard of LibreOffice and wouldn't know how to install it. Well, we thought, now only two people have to stay compatible with each other. We installed the same version of LibreOffice (first Still, then Fresh), I on Linux, he on Windows. But the different operating systems probably had little to do with what followed.

ad (ii): I was responsible for all display items in the proposal, and I used a combination of Mathematica, Python, Gimp, and Inkscape to create the seven figures contained in it. The final SVG, however, was always generated by Inkscape. I experienced two serious problems with these figures. First, certain line-art elements such as arrows were simply not shown in LibreOffice or in PDFs created by it. Second, the figures tended to “disappear”: when trying to move one of them, another would suddenly turn invisible. The caption numbering showed that they were still part of the document, and simply inserting them again messed up the numbering. We managed to find one of these hidden figures in the nowhere between two pages (like being trapped between dimensions 😱), but others stayed mysteriously hidden. We had to go back to the previous version of the document to resolve these issues, and in the end I converted all figures to bitmaps. D'oh!
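At least the final conversion to bitmaps can be scripted with Inkscape itself. A sketch of the batch step, assuming the Inkscape 1.x command-line flags (the 0.92 series used --export-png instead); shown as a dry run that only prints the commands it would execute:

```shell
# Dry run: print the export command for each figure
# (drop the 'echo' to actually convert; assumes Inkscape 1.x flags)
for f in fig1.svg fig2.svg; do
    echo inkscape "$f" --export-type=png --export-dpi=300
done
```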

ad (iii): I wrote a large part of my text in one session and inserted all symbols and equations using TexMaths. That worked perfectly, and after saving the document, I went home, quite satisfied with my achievements that day. When I tried to continue the next day, LibreOffice told me the document was corrupted and was subsequently unable to open it. I finally managed to open it with TextMaker, which didn't complain, but also didn't show any of the equations I had inserted the day before. Well, I saved the document anyway to at least rescue the text. Opening the file saved by TextMaker with Writer worked, and even all symbols and equations showed up as SVG graphics, but they could no longer be edited with TexMaths.

ad (iv): Since my colleague had previously used the Mendeley plugin for Word, he took on the task of inserting our various references (initially about 40). That seemed to work very well, although he found the plugin irritatingly slow (40 references took something like a minute to process). However, when he tried to enter additional references a few days later, Mendeley claimed that the previous ones had been edited manually and displayed a dialog asking whether we would like to keep this manual edit or discard it. Regardless of the choice, the previous citations were now generated twice, and with each further citation, twice more, so that after adding three more citations, [1] had become [1][1][1][1][1][1][1][1]. The plugin also took proportionally longer to process the file; in the last example, about 10 minutes. Well, we went one version back. But what had worked so nicely the day before was now inexplicably broken. It turned out that a simple sync of Mendeley (which is carried out automatically when you start the software) can be sufficient to trigger this behavior. We finally inserted the last references manually, overriding and actually irreversibly damaging the links between the citations and the bibliography.

In the final stages, working on the proposal felt like skating on atomically thin ice (Icen 😎). We always expected the worst, and instead of concentrating on the content, we treated the document like a piece of prehistoric art that could be damaged by anything, including merely viewing it on the screen. That feeling was very distracting. I would have loved to revise my earlier verdict, really, but LibreOffice in its present state is clearly no alternative to LaTeX for preparing the documents and presentations required in my professional environment. I will check again in another ten years. 😉

In principle, I would have no problem with being solely responsible for the document if I could use LaTeX and simply receive the collaborators' contributions as plain text. It is they who have a problem with that, since they don't know what plain text is. In this context, I increasingly understand the trend towards collaborative software: it's not that people really work on a document simultaneously; what counts is the guarantee that everybody works on it with the same software.