# Hidden settings

Since I quit KDE, and desktop environments in general, I'm primarily using applications developed for LXDE, XFCE, and Gnome. I have no problems with the former two, but I keep having conceptual difficulties with the ideologically driven minimalism of the latter. Take file managers and their associated terminal. In PCManFM (LXDE), one can change the associated terminal as expected in the program's preferences; the same holds for Thunar (XFCE). But of course not in Nautilus (Gnome) or any of its forks, such as Nemo (Cinnamon). All of these “nonessential” settings have been banished to dconf, the ‘registry’ of Gnome, which can be accessed either with dconf-editor or the command-line tool gsettings. For example, the following command changes the terminal associated with Nemo to sakura:

gsettings set org.cinnamon.desktop.default-applications.terminal exec 'sakura'

I hope to remember that next time, instead of once again spending an eternity searching the preferences.
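Since these dconf keys are easy to forget, it helps to query them first. A small sketch, assuming the Cinnamon schema is installed (schema and key names as in the command above):

```shell
# List all keys in the schema, read the current value, then change it
gsettings list-recursively org.cinnamon.desktop.default-applications.terminal
gsettings get org.cinnamon.desktop.default-applications.terminal exec
gsettings set org.cinnamon.desktop.default-applications.terminal exec 'sakura'
```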

# Better annotations

A significant part of my daily work consists of critically reading drafts of publications or project proposals. I usually place handwritten comments on a printout of the respective document and discuss them with the author in my office, but that isn't such a good idea in the time of SARS-CoV-2. We now hold these discussions in video meetings, looking at the document together on someone's shared screen showing an annotated pdf. Now, I'm using evince to annotate pdfs, and didn't like the fact that all annotations seem to come from ‘Unknown’. In principle, that can be changed by editing the author in the annotation's properties, but I certainly would not have enjoyed doing that for each of the 80+ comments I had made on the present manuscript.

Alas, the official help told me that setting a different default author is not possible. And that seemed final, since it came from the most authoritative source – the developers themselves. But I finally found a surprisingly simple solution in the place where, at the time, I had least expected it: the ArchWiki. Shouldn't the developers know that evince looks into /etc/passwd? In any case, a simple

usermod -c "Deus ex machina" cobra

ensured that my comments would now be easily distinguishable from those of the other coauthors.
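The mechanism is plain to see in passwd(5): each line has seven colon-separated fields, and evince apparently takes the author from the fifth, the GECOS or comment field, which is exactly what usermod -c sets. A quick sketch (the entry is a made-up example matching the command above):

```shell
# A passwd(5) entry after the usermod call above; field 5 is the GECOS field
entry='cobra:x:1000:1000:Deus ex machina:/home/cobra:/bin/bash'
printf '%s\n' "$entry" | cut -d: -f5    # prints: Deus ex machina
```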

# Fast, faster, M1?

I should have created a category called “Modern advertisement techniques” or “How-the-media-manipulate-us-to-help-Apple-sell-its-products”. The crude attempts of the tabloid press are so glaringly obvious that they are more amusing than anything else. What I find far more disconcerting is the subtle approach one encounters in media of a higher standard. At first, the occasional inaccuracy or omission seems innocuous enough, but after a while it becomes clear that all of these apparent oversights and mishaps are invariably in Apple's favor, amounting to an act of framing, deliberate or not. Here is one and here is another example of what I'm talking about.

Now, in June Apple announced that they are going to switch from Intel to ARM, and in November they announced the Apple M1 system-on-chip with their usual grandiose and vastly exaggerated claims (3×, 6×, 15× faster!!!). One doesn't have to be the oracle of Delphi to predict the resulting media circus and the ever-higher-flying expectations based on nothing but hype.

Finally, some meaningful benchmarks (Cinebench R23) appeared in c't 26/2020. Before examining and evaluating the numbers, here are two quotes from this issue (my translation):

p. 36 (Bit-Rauschen): In any case, it will be exciting to see whether the 15 W Ryzen 5000U can at least catch up with Apple's ARM racer M1 in multithreading and thus save the honor of x86.

p. 44 (Alles M1!): In single-core performance, the actively cooled M1 [...] outpaces all previous mobile processors of the 15 to 45 W class [...]. In the multi-core rating, it slots in between the 45 W mobile CPUs Core i7-10750H (6300 points) and Ryzen 5 4600H (8370 points) [...].

Now, these statements very clearly imply that the performance of the M1 surpasses that of any currently available 15-W mobile processor, wouldn't you say so?

Let's compare the single/multithreaded Cinebench R23 scores [1] of Intel's Tiger Lake top model and three Ryzen 7 4000U models with those of the M1:

i7-1185G7               1538/6264
4700U                   1184(1218)/6874(7269)
M1                      1514(1517)/7760(7786)
4750U                   1184/8088
4800U                   1235/10156

As you can see, Apple has come up with a highly competitive chip, offering single-core performance on par with the i7-1185G7 and multi-core performance between the 4700U and the 4750U. But we need neither a 45 W CPU nor the upcoming Ryzen 5000U to leave the M1 far behind in multi-core performance: the 4800U does that well enough. And that's what I would have liked to read in an objective summary of the M1, instead of the distorted statements above.
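To put a number on “far behind”: from the table above (cpu-monkey.com values), the 4800U outpaces the M1 in the multi-core test by roughly 30%:

```shell
# Multi-core Cinebench R23 ratio of the 4800U to the M1 (values from the table)
awk 'BEGIN { printf "%.2f\n", 10156 / 7760 }'    # prints: 1.31
```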

The M1 is the first ARM-based processor that offers competitive performance for desktop applications. Is that the end of Intel and AMD? The loss of Apple as a major client may seem enormous for Intel, but it is actually rather insignificant, and some even believe it to be beneficial for them. Moreover, the M1 currently runs only on Apple hard- and software, resulting in a correspondingly small market share. More alarming, particularly for AMD with their hopes of breaking the dominance of Intel processors in data-center and high-performance computing applications, is the current development of ARM-based server processors offering high performance at an affordable price. Remember when Linux-based x86 boxes replaced SPARC, MIPS, PA-RISC, and PowerPC workstations running Solaris, IRIX, HP-UX, and AIX? That was just 11 years ago. Perhaps we are witnessing an analogous transition right now.

[1] A very similar comparison can be found, by the way, in the current issue of c't (Apfel-Alternativen, c't 1/2021, p. 109). The values here are taken from cpu-monkey.com, and the ones in parentheses are from c't. If you have an older processor and would like to compare, chances are good that you'll find it in the comprehensive, community-compiled list at computerbase.de.

# Blue washing

There are many examples of companies that didn't last very long after being acquired by IBM. In a process called blue washing, IBM replaces the processes, corporate culture, and philosophy of the new acquisition with its own. This routine inevitably leads to the complete assimilation and absorption of the company until nothing is left of its original identity.

It is thus no surprise that IBM's spectacular takeover of Redhat in 2018 was met with considerable scepticism and sometimes outright concern. And rightly so: a few days ago Redhat announced the end of CentOS as we know it. Ironically, those who recently upgraded from CentOS 7 to CentOS 8 to get support until 2029 instead of 2024 now have only one year left.

CentOS Stream is not a viable replacement for the most common use case of CentOS, namely, running software suites certified for Redhat as I've described previously on this blog (part I, part II). New distributions filling the gap that CentOS will leave have been announced (Rocky Linux, Project Lenix), and we will see if and when they materialize, and how long they last. In the meantime, I'm glad that I generally avoided Linux distributions with a commercial background.

# Keep Alive

I'm so used to mosh that I'm always surprised by how fast a plain ssh connection runs into a timeout. Or worse, into a kind of half-terminated, hanging connection with an apparently unresponsive terminal.

Which is weird, since there's already one measure in place intended to avoid this situation: TCPKeepAlive, which is enabled by default. To quote the man page of sshd_config:

On the other hand, if TCP keepalives are not sent, sessions may hang indefinitely on the server, leaving "ghost" users and consuming server resources. The default is yes (to send TCP keepalive messages), and the server will notice if the network goes down or the client host crashes. This avoids infinitely hanging sessions.

What irritates users most in this situation is the unresponsive terminal, which seems to no longer accept any commands and won't close unless the ssh connection is terminated by killing the process. But there's no need to kill anything, as ssh offers several escape sequences that also take care of this case:

~.  - terminate connection (and any multiplexed sessions)
~B  - send a BREAK to the remote system
~C  - open a command line
~R  - Request rekey (SSH protocol 2 only)
~^Z - suspend ssh
~#  - list forwarded connections
~&  - background ssh (when waiting for connections to terminate)
~?  - this message
~~  - send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)

To prevent this from ever happening again, two options can be set on either the server or the client.

On the server side, one can set the following options in /etc/ssh/sshd_config:

TCPKeepAlive no (default yes)
ClientAliveInterval 30
ClientAliveCountMax 240

To quote again the man page of sshd_config:

ClientAliveInterval Sets a timeout interval in seconds after which if no data has been received from the client, sshd(8) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client.

ClientAliveCountMax Sets the number of client alive messages which may be sent without sshd(8) receiving any messages back from the client. If this threshold is reached while client alive messages are being sent, sshd will disconnect the client, terminating the session. The default value is 3. If ClientAliveInterval is set to 15, and ClientAliveCountMax is left at the default, unresponsive SSH clients will be disconnected after approximately 45 seconds. Setting a zero ClientAliveCountMax disables connection termination.

On the client side, corresponding options exist in /etc/ssh/ssh_config, but it is better to change them on a per-user basis in ~/.ssh/config (instead of adding them manually to each ssh call via the -o command-line parameter):

TCPKeepAlive no (default yes)
ServerAliveInterval 30
ServerAliveCountMax 240

The meaning is the same as above, but the roles are reversed: now the client sends an alive message to the server every 30 s, and drops the connection if it doesn't receive an answer from the server within 2 hours.
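The two-hour figure follows directly from the two options: the connection is dropped after interval × count seconds without a response:

```shell
# ServerAliveInterval 30 × ServerAliveCountMax 240 = 7200 s = 2 h
echo $(( 30 * 240 ))    # prints: 7200
```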

# Missing notifications

It took quite some time until I realized that I wasn't getting notifications anymore on any of my Arch-based installations, but when aarchup didn't chime in even after days, I finally noticed that something must be wrong.

The culprit is the new autostart file coming with xfce4-notifyd 0.6.2:

[Desktop Entry]
Type=Application
Exec=/usr/lib/xfce4/notifyd/xfce4-notifyd
OnlyShowIn=XFCE;

Only show in XFCE? As an Openbox user, I feel seriously excluded and discriminated against. Well, actually, we can just delete this line in a user context and continue as before:

cp /etc/xdg/autostart/xfce4-notifyd.desktop /home/cobra/.config/autostart/
vim /home/cobra/.config/autostart/xfce4-notifyd.desktop
G dd ZZ

😉
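The same edit can also be scripted instead of done in vim: filter out the OnlyShowIn line while copying. Shown here on an inline copy of the entry; in practice, the input would be /etc/xdg/autostart/xfce4-notifyd.desktop and the output ~/.config/autostart/xfce4-notifyd.desktop:

```shell
# Drop the OnlyShowIn line from the desktop entry while copying it
printf '%s\n' '[Desktop Entry]' \
              'Type=Application' \
              'Exec=/usr/lib/xfce4/notifyd/xfce4-notifyd' \
              'OnlyShowIn=XFCE;' \
  | grep -v '^OnlyShowIn='
```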

# Goldilocks

For our webserver, the lefh script provided by Hiawatha, which I run daily via a cron job, guarantees that the certificates for transport encryption are renewed prior to their expiration. For our IRC server, in contrast, I have to do that manually. That might seem like a nuisance, but on the other hand, it gives me the chance to review the current state of the art regarding transport encryption and to bring my configuration up to this level. I've previously used ed25519 (which I also choose when generating SSH keys), but ed448 seems an even better choice.

certtool --generate-privkey --key-type ed448 --sec-param ultra --outfile key.pem
certtool --generate-self-signed --load-privkey key.pem --template cert.cfg --outfile cert.pem
certtool --get-dh-params --sec-param ultra --outfile dhparams.pem
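To make sure the new key really is an ed448 key, the result can be inspected with certtool as well (same files as above):

```shell
# Show the certificate details, including the public-key algorithm
certtool --certificate-info --infile cert.pem
```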

# Tallyho

Recently, I've had a hard time with my virtual machines (VMs). With the update to kernel 5.8, starting any of them caused my entire system to lock up so hard that even the magic SysRq didn't help. The problem persisted from August 15th to September 9th, when it was finally solved by VirtualBox 6.1.14. After the update, I immediately tended to my VMs to update them.

• CentOS: 0.012 packages – check.

• Debian: 123 packages – check.

• Archlinux: 123456 packages – ch... wait a sec, login incorrect after reboot?

My physical installations of Arch didn't exhibit such an attitude, so I suspected the problem to be related to Arch's virtualbox-guest additions (since the virtual CentOS and Debian were also behaving properly). I was wrong.

PAM had been updated to version 1.4, dropping support for the long-deprecated tally module. My virtual Arch, however, dates from 2009, and my /etc/pam.d/login indeed referenced this module. But there was also a login.pacnew that would have corrected this issue if only I had bothered to handle 'pacnew' files as advised:

“These files require manual intervention from the user and it is good practice to handle them right after every package upgrade or removal. If left unhandled, improper configurations can result in improper function of the software or the software being unable to run altogether.”

I hate being in the corner with the criminally stupid, but there I am. I'll try the pacman hook in the future.
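For the record, a hook along the lines I have in mind: a minimal sketch in the alpm-hooks(5) format that lists freshly created .pacnew files after every upgrade. The file name and the find invocation are my choices for illustration, not a canonical solution:

```ini
# /etc/pacman.d/hooks/pacnew-check.hook (hypothetical path and name)
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Listing recently created .pacnew files ...
When = PostTransaction
Exec = /usr/bin/find /etc -name '*.pacnew' -mtime -1
```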

# Modern times

Since the beginning of time (or so it seems), I've had Emacs installed with the AUCTeX extension to handle LaTeX documents. And, mind you, I'm still using it from time to time! As a matter of fact, in 2018 I worked on a number of manuscripts exclusively with Emacs to prepare myself for the editor shootout I had promised for the end of 2018 (and which may or may not be done by the end of this year). I'm still quite happy with it, that much I can already say.

Perhaps you can then understand my surprise when yay told me that AUCTeX has been orphaned on the AUR. I was even more surprised when I saw that the maintainer was Stefan Husmann, who is also the maintainer of several hundred other packages and a moderator on the German Archlinux forum. Not the guy to thoughtlessly abandon a package on a mere whim.

And then it hit me: of course! Emacs got its own package manager (ELPA) some time, well, perhaps two years ... actually, eight years ago. 😣

So what's the meaning of this post? Let's say that there are one billion computer users out there. Only one percent of these know what an editor is, and only one percent of these again are actually using one. Of these again, only one percent use Emacs. Once again, one percent of these Emacs users use AUCTeX, but more than 80% of those have installed AUCTeX via ELPA, the recommended and canonical way. I'm not one of them. Am I the only one? No, if we do the math, it turns out that there's one kindred spirit who is in the same situation as me. This post is for you, my brother in arms!
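For the sceptics, the estimate above spelled out, taking the 80% at face value (numbers exactly as in the text):

```shell
# 10^9 users → 1% know editors → 1% use one → 1% use Emacs → 1% use AUCTeX;
# of those, 80% use ELPA, leaving 2: the author and one kindred spirit
awk 'BEGIN {
  auctex = 1e9 * 0.01 * 0.01 * 0.01 * 0.01
  print auctex, auctex * (1 - 0.8)
}'    # prints: 10 2
```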

Well, as stated above, Emacs recommends installing AUCTeX via ELPA. After removing the AUR version of AUCTeX, we can install it manually with

M-x package-install RET
auctex

or, in a properly maintained init.el (like mine), automatically:

(require 'package)

(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)

(package-initialize)
(when (not package-archive-contents)
  (package-refresh-contents))

(defvar myPackages
  '(better-defaults
    material-theme
    ein
    elpy
    flycheck
    py-autopep8
    ac-math
    auctex
    auto-complete-auctex))

(mapc #'(lambda (package)
          (unless (package-installed-p package)
            (package-install package)))
      myPackages)

and comment out the now-obsolete manual loading of AUCTeX:

;;(load "auctex.el" nil t t)
;;(load "preview-latex.el" nil t t)

While I was pondering the question of whether this post would be relevant to anybody at all, I came across this news: Levee, a vi clone, has got a new major release after 30 years. Now that's the spirit! Compared to the estimated number of users interested in this update (interestingly, the only comment in the AUR is from Stefan Husmann), my post is for the masses. To celebrate this Chuck-Norris-ness of software development, I've installed levee and prepared this text in it. It was OK (just like vi), but David Parsons will certainly understand if I say that I prefer vim for everyday work.

# SOHO system monitoring

I have to admit that where computers are concerned, I'm somewhat of a control freak: for more than 20 years, a system monitor has been an integral part of my desktop. For the last 10 years, conky has filled this role. Conky can be configured exactly to one's liking and may actually be a quite stylish element of the desktop. My conkies instead display a maximum of information while still being (for me) aesthetically pleasing. Judge for yourself:

You can get the configuration file here, if you are interested.

Now, having an active element on the desktop can be distracting, and I understand that this may not be to everyone's liking (although I myself feel entirely detached from the system I'm working on without this direct view into the engine room). Besides, configuring conkies is also not something you could call simple and intuitive.

If you are a private or SOHO user interested in an on-demand system monitor, looking at the list of system monitors on Wikipedia doesn't really help. For example, in the office we are quite happy with Nagios for monitoring the health of a few dozen servers, but it would be bizarre overkill to employ it for the few systems in a SOHO setting.

If it's about monitoring a single system, the most obvious choice is bashtop or, after a port to Python just two weeks ago, bpytop, a system monitor for the command line that I'd already mentioned in a previous post. An interesting alternative that is graphically more spartan but no less capable is glances. Neither of these programs requires configuration; both work out of the box. Here are two screenshots showing bpytop running in mosh sessions on pdes-net.org and blackvelvet, my desktop. The former is virtualized and thus lacks CPU temperatures.

If bashtop/bpytop isn't sufficient, but configuring Nagios is too much, I'd recommend a look at Monitorix, which requires very little configuration and can be accessed with a browser. I've deployed it on my office PC to be able to examine the computing resources of this system in great detail from my living room. And that works really well: with the help of the Monitorix logs, I can present solid evidence when asking for more RAM or storage space. 😎 Here's the very top of the Monitorix report on my desktop, showing that I'm doing just fine with what I currently have. 😞