
Server security

I really didn't expect that, but my recent post about our new server attracted more questions than all posts in 2016 combined. I thought that people interested in such old-school IT issues would be essentially extinct, but apparently a few still exist.

In what follows, I try to provide some answers. I've grouped the questions such that they revolve around the same topic even if they were not asked by the same person. My answers are all short except for the last one, where I elaborate on server security.

Can every 'ordinary' citizen rent such a server? Or do we need a trade license (Gewerbeschein)? Do we need special certificates?

No, anyone can rent a server. Technically, however, it is not advisable to run a server without some basic knowledge of system and network administration. I'll come back to that point in more detail below.

What can I do with my own server? What benefit does it offer?

You can do anything you can imagine to do with a computer. And what's most important: it's yours, and no Google/Facebook/Dropbox will suddenly discontinue the service to “optimize the user experience”. And if your hoster goes bankrupt, just move to the next one—you can always rent another server (unless the government decides to ban private servers to “fight cyberterrorism” or whatever else is en vogue).

With your own server, you could, for example, host your blog, as I do. You could run a mail server, set up your own cloud, provide groupware for yourself and your family, or use it as a game server and communicate via IRC, as we do. You could also install a Jabber server supporting OMEMO to provide a Skype and WhatsApp replacement for your family and friends that is guaranteed to be safe from eavesdropping.

What the heck is this Jessie and Stretch thing? Do I need to know that?

Perhaps not. But I'm not sure. If you don't know, you are in all likelihood not very familiar with GNU/Linux, and that's not the ideal basis for administering your own server. Well, you could rent a Windows server, of course. I have no idea for what reason, though.

Can we rent a server anonymously?

Yes, for example here. Note: in case you want to register a domain (and who doesn't?), you want to do that anonymously as well. Otherwise your identity is revealed by a simple whois request (check, for example, 'whois').

How do you connect to the server? Can I use it also by ftp? Or by the Windows Explorer? What about smartphones?

One has to distinguish the interface used to administer the server from the services provided by it. Regarding the former, I connect exclusively via ssh (or rather via mosh, an ssh replacement), and I copy files the same way, using the ssh-based tools scp, sftp, and sshfs. There are ssh clients for Android and iOS as well, so you can administer your server from anywhere you like. Concerning the latter, you can use any protocol for which you have configured the corresponding service—in this context, an smb server to allow access to the user's files via the Windows Explorer. However, I would definitely not recommend that. Rather, I'd use winscp or implement user access to files via webdav over https.

Can hackers attack the server?

Whatever you call them: there will be plenty of people trying to get access to your server. For example, in the three days in which our new server was running in its default configuration, 'lastb' revealed 6742 login attempts via ssh. Fortunately, our hoster had set a passphrase that was definitely better than the most popular one.
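To see where those attempts come from, the btmp log can be summarized right on the shell (a sketch; 'lastb' requires root, and the exact field layout may vary slightly between distributions):

```shell
# Group failed logins by source address (third field of lastb's output)
# and list the most active offenders first:
lastb | awk 'NF > 3 { print $3 }' | sort | uniq -c | sort -rn | head
```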

What did you do to secure the server and to avoid hackers taking it over?

The measures I usually take are all very simple and do not require membership of the inner circles of server adminship. The core principle is to minimize exposure by scrutinizing the software base.

What do I mean by that? I can illustrate it best with an example from one of my Arch systems:

➜  ~ arch-audit
Package bzip2 is affected by ["CVE-2016-3189"]. Update to 1.0.6-6!
Package jasper is affected by ["CVE-2016-9591", "CVE-2016-8886"]. High risk!
Package libtiff is affected by ["CVE-2016-10095", "CVE-2015-7554"]. Critical risk!
Package openjpeg2 is affected by ["CVE-2016-9118", "CVE-2016-9117", "CVE-2016-9116", "CVE-2016-9115", "CVE-2016-9114", "CVE-2016-9113"]. High risk!
Package openssl is affected by ["CVE-2016-7055"]. Low risk!

As you can see, libtiff is listed as critical, and some of the vulnerabilities even date from 2015. Better to get rid of it, right? Sure, but:

➜  ~ whoneeds libtiff | tr -d '\n'
Packages that depend on [libtiff]  aarchup  artha  auctex  awoken-icons  blueman  chromium  clipit  conky  conky-colors  cups  darktable  djvulibre  emacs  emacs-minimap  engrampa  feh  firefox  galculator  gimp  gimp-webp  gksu  gnome-keyring  gnome-themes-standard  gnuplot  gparted  gpicview  graphviz  gsimplecal  gst-libav  gst-plugins-good  gtk-engine-murrine  gtk-engines  gtk-theme-orion-dark  gtk2-perl  gtk3-print-backends  guake  gucharmap  gvfs  gvim  hplip  hsetroot  inkscape  keepassx2  kodi  libbpg  libcaca  libreoffice-fresh  lxappearance  lxappearance-obconf  lxinput  lxrandr  lxterminal  lyx  masterpdfeditor-qt5  mirage  mpv  mupdf  mupdf-tools  netpbm  network-manager-applet  nitrogen  numix-circle-icon-theme-git  obconf  obkey  obmenu-generator  openbox  openbox-themes  orage  owncloud-client  pavucontrol  pcmanfm  portfolio  povray  pstoedit  pstotext  pychess  python-matplotlib  python-pillow  python-scikit-image  python-seaborn  qpdfview  rawtherapee  ricochet  scribes  scribus  scrot  seahorse  spacefm  spyder3  sqlitebrowser  terminator  texlive-bibtexextra  texlive-core  texlive-fontsextra  texlive-formatsextra  texlive-games  texlive-genericextra  texlive-htmlxml  texlive-humanities  texlive-latexextra  texlive-music  texlive-pictures  texlive-plainextra  texlive-pstricks  texlive-publishers  texlive-science  tint2  tumbler  vertex-themes  vesta  virtualbox  volumeicon  webkitgtk2  wxpython  xfce4-notifyd  xfce4-terminal  yelp  zenity  zim

As you see, essentially everything depends on this package. No way to get rid of it on a desktop system! But surely that's no issue on a pure command-line system like our server, right?

$ aptitude why libtiff5
i   webalizer Depends libgd3 (>= 2.1.0~alpha~)
i A libgd3    Depends libtiff5 (>= 4.0.3)

Ok, let's remove webalizer. But after that:

$ aptitude why libtiff5
i   pinentry-gtk2      Depends libgtk2.0-0 (>= 2.14.0)
i A libgtk2.0-0        Depends libgdk-pixbuf2.0-0 (>= 2.22.0)
i A libgdk-pixbuf2.0-0 Depends libtiff5 (>= 4.0.3)

Who installs pinentry-gtk2 on a system without X server? WHO?

/usr/bin/apt-get --auto-remove purge libtiff5
Requested-By: cobra (1000)
Install: pinentry-curses:amd64 (1.0.0-1, automatic)
Purge: libcroco3:amd64 (0.6.11-2), libpangoft2-1.0-0:amd64 (1.40.3-3), libcups2:amd64 (2.2.1-4), libimlib2:amd64 (1.4.8-1), w3m-img:amd64 (0.5.3-34), libgtk2.0-bin:amd64 (2.24.31-1), libgdk-pixbuf2.0-0:amd64 (2.36.3-1), libpixman-1-0:amd64 (0.34.0-1), libsecret-1-0:amd64 (0.18.5-2), librsvg2-common:amd64 (2.40.16-1), gnome-icon-theme:amd64 (3.12.0-2), libavahi-common-data:amd64 (0.6.32-1), libgail-common:amd64 (2.24.31-1), libavahi-common3:amd64 (0.6.32-1), libgtk2.0-0:amd64 (2.24.31-1), libxcursor1:amd64 (1:1.1.14-1+b1), libthai-data:amd64 (0.1.26-1), libxcb-shm0:amd64 (1.12-1), libid3tag0:amd64 (0.15.1b-12), libsecret-common:amd64 (0.18.5-2), libgail18:amd64 (2.24.31-1), libxcb-render0:amd64 (1.12-1), fontconfig:amd64 (2.11.0-6.7), libtiff5:amd64 (4.0.7-5), libatk1.0-0:amd64 (2.22.0-1), libpangocairo-1.0-0:amd64 (1.40.3-3), librsvg2-2:amd64 (2.40.16-1), pinentry-gtk2:amd64 (1.0.0-1), libgif7:amd64 (5.1.4-0.4), hicolor-icon-theme:amd64 (0.15-1), libthai0:amd64 (0.1.26-1), libgdk-pixbuf2.0-common:amd64 (2.36.3-1), libgtk2.0-common:amd64 (2.24.31-1), libgraphite2-3:amd64 (1.3.9-3), libjbig0:amd64 (2.1-3.1), gtk-update-icon-cache:amd64 (3.22.6-1), libatk1.0-data:amd64 (2.22.0-1), libharfbuzz0b:amd64 (1.2.7-1+b1), libcairo2:amd64 (1.14.8-1), libavahi-client3:amd64 (0.6.32-1), libpango-1.0-0:amd64 (1.40.3-3), libjpeg62-turbo:amd64 (1:1.5.1-2), libdatrie1:amd64 (0.2.10-4)

That was an example illustrating what I meant with “scrutinizing the software base”. But let's proceed step by step and relive the few hours when I configured our new server.

  1. I first tighten the security of sshd:

on the client:

➜  ~ ssh-keygen -t ed25519
➜  ~ ssh-copy-id -i ~/.ssh/

on the server:

su -
$  vim /etc/ssh/sshd_config
Port XYZ
PermitRootLogin no
ChallengeResponseAuthentication no
PasswordAuthentication no
$ systemctl restart sshd.service

XYZ has to be replaced with a sensible port number, of course. ;)
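One extra line of caution that has saved me more than once: have sshd validate the edited configuration before restarting, and keep the current session open while you test the new port from a second terminal.

```shell
# Syntax-check sshd_config first; an invalid config on a remote
# machine means locking yourself out:
if sshd -t; then
  systemctl restart sshd.service
else
  echo "sshd_config has errors - not restarting"
fi
```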

  2. Relieved, I next check which services are running on the system to have an overview:
systemctl --type=service
  3. I then look for services that opened a port and listen on it. I prefer to use
netstat -tulpen

for this purpose,1 but I usually also install 'iftop' and 'iptraf' to have a look at the traffic.

1 Note that you have to install the 'net-tools' package on many distributions, as 'netstat' has been deprecated since 2011 in favour of the 'ss' command from the iproute2 package. The 'netstat' output is much more compact and readable, though.

Obviously, it is somewhat paradoxical to rely on a local check on a system that might already have been compromised. I thus also use 'nmap' to have a look from the outside:

nmap -sS -sU -T4 -A -v
  4. The simple tests above reveal that the server was basically prepared to run an online shop and thus has plenty of services running: Apache, nginx, postfix, dovecot, mysqld, sshd, froxlor, etc. I stop and disable all of them except sshd:
systemctl stop <service>
systemctl disable <service>

and deinstall them:

apt purge <service>
  5. After that, there are plenty of orphans that I remove with
wajig autoremove
  6. Update:
wajig dailyupgrade
  7. Upgrade:
vim /etc/apt/sources.list
wajig daily-upgrade
wajig sys-upgrade

This last step may appear questionable, as Debian Testing (currently called Stretch) does not receive the same security support as Stable (currently called Jessie). Well, I definitely prefer Testing for its more up-to-date packages, and I think it's more important to avoid packages from the contrib and non-free repositories.
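For reference, the switch to Testing itself is a one-line change to the apt sources (a sketch; on a stock Debian install the file is /etc/apt/sources.list):

```shell
# Replace the suite name, then do a full upgrade
# (apt update && apt full-upgrade) as usual:
sed -i 's/jessie/stretch/g' /etc/apt/sources.list
```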

  8. I then check the support status of my installation:

debian-security-support “...identify installed packages for which support has had to be limited or prematurely ended...”


Everything's supported. The more software you have installed, the less likely this result becomes.

  9. And finally, I search for vulnerabilities (similar to arch-audit above):

debsecan “...generates a list of vulnerabilities which affect a particular Debian installation...”

CVE-2016-2148 busybox (remotely exploitable, high urgency)

What is busybox doing here? Well, it's gone.

Update: Damn, I forgot – it's needed for update-initramfs. No big deal, though: what can be removed that easily can just as easily be installed again. So don't worry, you won't be able to accidentally remove the kernel or libc. ;)

After these nine steps (the nine hidden secrets for perfect server security!!!), the total size of our installation (disregarding user content in /home and in /var/www) is less than 1.2 GB.

What did I achieve so far? Well, first of all, I have stopped and removed all running services I do not need. That's certainly the most important contribution to server security, as all of these services were remotely accessible. Second, I have upgraded the entire installation to a current version of the distribution, trusting that in this version previous CVEs have, by and large, already been recognized and fixed. Third, I have identified and removed remaining programs and libraries with vulnerabilities rated as critical.

What can I do more? Can I rate the security of the system somehow, and monitor it?

Yes. Such a rating is offered, for example, by lynis, a security audit system by rkhunter author Michael Boelen, which provides a wealth of helpful information and advice out of the box, without the need to configure anything. Great for beginners, useful for advanced users, and worth recommending alone for its suggestions concerning the configuration of the ssh server. But beware, don't lock yourself out. ;)

With the current configuration, lynis gives a hardening index of 78%. I'm quite satisfied with that score (you probably won't get a 100% as long as the server is still connected to a network).

How can I make sure that we keep that score? Well, lynis is really very helpful in that respect, since it suggests, depending on the distribution, the installation of several useful tools that help in future security-related decisions.

Many of these tools, however, work best when they are executed by a cronjob in the background and inform the administrator by local mail in case there's anything to report. For this reason, it is imperative for any Linux server installation to include a functional mail transfer agent (MTA) configured for local delivery. In Debian, I always choose exim because it's so wonderfully easy to configure for this case. I'm so used to this genie on the system, telling me about the good and the bad, that I install an MTA not only on servers, but on every system I administer (although I usually prefer postfix over exim).

When performing system updates on Debian, I additionally like to have the following tools as little helpers in the background. apt-listbugs “retrieves bug reports from the Debian Bug Tracking System and lists them”, apt-listchanges “compares a new version of a package with the one currently installed”, apt-show-versions “shows upgrade options within the specific distribution of the selected package”, checkrestart (part of debian-goodies) “helps to find and restart processes which are using old versions of upgraded files (such as libraries)”, and needrestart “checks which daemons need to be restarted after library upgrades”. I also like logcheck which “helps spot problems and security violations in your logfiles automatically and will send the results to you in e-mail” (see above).

These tools are helpful, but I like to go one step further and have an automated, daily security check. That's exactly what checksecurity does, which, according to Debian, performs “basic system security checks”. Well, how basic depends a lot on the additionally installed packages: recommended are, among others, tiger, which in turn refers to other packages such as chkrootkit, “searching the local system for signs that it is infected with a 'rootkit'”, as well as file monitoring systems such as tripwire and aide, which just make little sense on a rolling-release system. This fact does not diminish the value of checksecurity, of course, which I would very much recommend installing.
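If the packaged scripts don't already set one up, such a daily check is nothing more than a cron entry; a minimal sketch (the 3:00 schedule and the chkrootkit path are assumptions, and cron mails any output to root by itself):

```shell
# Install a system-wide cron entry: run chkrootkit nightly at 3:00;
# cron mails whatever the scan prints to root.
printf '0 3 * * * root /usr/sbin/chkrootkit\n' > /etc/cron.d/chkrootkit
```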

You certainly have noticed that I so far did not even mention the security evergreen: the firewall. Well, I do recognize the value of an enterprise-class firewall for a corporate network, but here we are talking about a software running on the very system that we desire to protect. This scenario reminds us of the infamous 'personal firewalls' under Windows, the legendary discussions on nntp:// and fefe's succinct summary:

Do Personal Firewalls improve security? — No.

Why do so many people install them, then? — Because those people are all idiots.

Well, nobody would judge the built-in firewall functionality of Linux equally harshly, and there are even one or two arguments in favor of using it. My view is that this built-in firewall is secondary compared to the measures discussed above, but it certainly doesn't hurt to use it. And that's what I do:

ufw default deny
ufw limit ssh
ufw allow http
ufw allow ...

One final word: be careful not to overdo things. The more security-related stuff you install, the more messages you will get, and the more dramatic it will all sound. For example, chkrootkit identifies the mosh server instance running on udp port 60001 as an infection when running the bindshell test. That's a trivial false positive, but in the grip of security paranoia, it will be amplified to the point where it can unbalance even experienced administrators. Stay calm, practice Zen, and acquire enough knowledge to immunize yourself against full-blown security hysteria.

The end of infinality

If you use Archlinux and the infinality bundle from bohoomil's repository: yesterday's update of harfbuzz from 1.3.4-1 to 1.4.1-1 may break important parts of your setup. The reason is that infinality relies on an outdated version of freetype2, and at present it seems unlikely that we will ever see an update:

Reason: Infinality is dead both upstream and with the downstream maintainer bohoomil, and differences with freetype upstream become small as development progresses

For details, see here and here.

Instead of downgrading harfbuzz, I thus reverted to the stock versions of freetype2, fontconfig, and cairo:

pacman -S --asdeps lib32-freetype2 lib32-cairo lib32-fontconfig
pacman -S --asdeps freetype2 cairo fontconfig

The first command only applies if you have the 32-bit multilib packages installed, as required, for example, for steam.

I have not yet replaced the fonts, nor is there any immediate need to do so. In fact, the current stock freetype2 now seems to offer font rendering of a quality equivalent to that of the previous freetype2 with the infinality patchset. Excellent!

I've just checked and found that the situation is even worse on Debian: on my mini (running Stretch/Sid), the installed infinality version of freetype2 is 2.4.9 from 2012. Compared to that, the stock version (2.6.3) can almost be called up-to-date...

apt purge fontconfig-infinality
apt purge libfreetype-infinality6
apt install libfreetype6

Better graphic formats

The most frequently used (and abused) raster image format—JPEG—recently celebrated its 25th anniversary. Its cousins are mostly even older: TIFF stems from 1986, GIF from 1987, and only PNG, the latter's intended replacement, was developed a few years later, namely, in 1995.

What kind of computer did I have in 1995? A Pentium 90 with 16 MB RAM and a 512 MB HDD. And that's what these formats were designed for. Today, more than 20 years later, we enjoy a factor of about 1000 with regard to CPU speed, memory, and storage size, but despite this enormous difference, our image file formats have remained the same.

Several new formats have been proposed in the past few years, such as Google's WEBP in 2010, BPG (better portable graphics), which is essentially owned by the MPEG LA, in 2014, and FLIF (free lossless image format) in 2015. Only WEBP is supported to a degree that allows one to actually use it, while BPG and FLIF are essentially still at the level of technology demonstrations.

This page offers a most illustrative comparison between the different lossy image formats, among them JPEG and its intended successors as well as BPG and WEBP. There's absolutely no question about the winner. Just look at Tennis or Steinway, for Pete's sake. No question, that is, if it weren't for the sodding patents. sigh

But forget the patents for the moment; let's rather look at something interesting. In this post, I look at these new image formats from a different perspective: how well can they compress essentially black-and-white line art?

Not that one should ever even consider doing that. Line art should always be stored as vector graphics; that much is obvious to anyone with even the faintest knowledge of graphic formats. Even a few scientific publishers know that. In the author guide to Nature Communications, for example, we find the statement:

All line art, graphs, charts and schematics should be supplied in vector format [...].

The author guides of most other publishers lack such explicit statements and rather breathe the spirit of the 1990s. For example, in an Elsevier FAQ we can read:

Why don't you accept PNG files?

We will constantly review technological developments in the graphics industry including emerging file formats - new recommended formats will be introduced where appropriate. PNG files do not cause issues in processing, but our submission systems are in progress of updating to allow for this useful new format.

In practice, however, most publishers have no problem with accepting vector graphics in EPS or PDF format and, most importantly, also use them 1:1 for the final publication. With one prominent exception: the American Chemical Society (ACS). Vector graphics submitted to any of the numerous ACS journals are invariably converted to a raster image. Some of their author guides even include a corresponding note:

NOTE: While EPS files are accepted, the vectorbased graphics will be rasterized for production.

Regarding the format and resolution of these raster graphics, we find the following exemplary recommendation in this guide:

Figures containing photographic images must be at least 300 dpi tif files in CMYK format; line art should be at least 1200 dpi eps files.

To specify a resolution for EPS files demonstrates a complete lack of understanding of vector graphics. And in the same spirit, we read:

Cover images should be 21.5 cm in width and 28 cm in height, with a resolution of 300 dpi at this size (this should be a file of at least 8 MB).

Oh, we cannot even handle compressed TIFFs? How wonderful to work with professionals.

Perhaps as a direct consequence of the resulting size of 1200 dpi bitmaps, I have never seen any figure in an ACS journal whose resolution exceeded 300 dpi. At least these figures are compressed, contrary to the implicit recommendation in the author guide. Depending on the preference of the technical staff at the respective ACS journal, the figures are included in the manuscript either as overcompressed JPEGs, exhibiting plainly visible compression artefacts, or as insufficiently compressed PNG files.

Insufficiently compressed? Yes—in contrast to JPEG, PNG employs lossless compression, and one can and should thus always employ the maximum compression level (9). Not doing so only increases the file size. The technical staff at ACS typically invokes only the minimum compression level 1. Furthermore, the file format is invariably 8 bit/color RGB, even for black-and-white line art. As a result, the 692 kB of a 295 dpi figure (extracted as described here) in one of my recent ACS publications could easily have been reduced to 138 kB. Or, alternatively, one could have produced a 1200 dpi version with a file size of only 787 kB—barely larger than the one included in the galley proofs.

And for all this “professional” service, we even pay handsomely. Why, then, do we publish there at all? Because of the impact factor, of course. I'll write more about this much too powerful incentive in the near future.

But let's come back now to the actual topic of this post, and consider the following grayscale line art that has been created with the help of graph and inkscape:

The original SVGZ is 21.6 kB, a PDF saved by inkscape 52 kB. Now let's see what happens when we convert this PDF into various raster image formats at a resolution of 1200 dpi.


The obvious choice of format is PNG. We can convert the SVGZ or the PDF in various ways. We could export a PNG directly from inkscape, of course. Alternatively, we could open the PDF in gimp and export it as PNG. Both are viable ways, but the CLI is actually more flexible and powerful. So let's open a terminal and enter

pdftocairo -png -scale-to-x 4000 -scale-to-y -1 -gray -antialias gray valence_bands.pdf valence_bands_cairo.png

That would be my usual way. It results in a nice grayscale PNG of 356 kB.

Another possibility is

convert -verbose -density 483.87 valence_bands.pdf -depth 8 valence_bands_convert.png

Equivalent to '-depth 8' is '-colorspace gray' (in this particular case). Either way, we get a file of 330 kB. Can we do better? Oh yes, by tuning the PNG compression parameters:

convert -density 483.87 valence_bands.pdf -colorspace gray -define png:compression-filter=1 -define png:compression-level=9 -define png:compression-strategy=1 def.png

300 kB! For the parameters, see here.

Now, that seems to be a fairly optimized PNG, but it is still almost six times larger than its predecessor, the PDF. Time for the PNG optimizers! Let's apply them to the smallest PNG we have obtained so far, the one with 300 kB.


optipng def.png -out opti.png

225 kB.


pngquant def.png

In contrast to the other optimizers, pngquant converts to a color palette! But with unexpected success:

220 kB.


pngout def.png out.png

189 kB. Takes ages. But it's the tool of the duke.


zopflipng def.png zopfli.png

190 kB. Google vs Ken Silverman: 0:1!

That's about the limit for PNG.
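With all candidates on disk, the results above can be ranked in one line (a sketch; def-fs8.png is the name pngquant emits by default, the other names as used above):

```shell
# Print size and name of each optimizer's output, smallest first;
# files that were not generated are silently skipped:
for f in def.png opti.png def-fs8.png out.png zopfli.png; do
  [ -f "$f" ] && printf '%8d  %s\n' "$(stat -c %s "$f")" "$f"
done | sort -n
```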

Let's check other lossless formats.


convert -verbose -density 483.87 valence_bands.pdf -depth 8 -flatten -compress lzma valence_bands.tiff

188 kB. Surprise, surprise: basically equal in size to the smallest PNG.


convert def.png -define webp:lossless=true def.webp

159 kB! Not bad at all.


bpgenc -lossless def.png -o def.bpg

387 kB. Not a format for lossless compression.


flif def.png def.flif

92 kB. Now that's a statement!

But still way larger than the PDF. Is there perhaps a lossy algorithm capable of creating a 1200 dpi image smaller than the PDF? Note that the present graphics, with its hard contrasts, is a worst-case scenario for JPEG and, I presume, for essentially all lossy image formats.

JPEG (libjpeg-turbo)

convert def.png -flatten -quality 1 def_default.jpeg

165 kB. Hardly smaller than the lossless variants and with the characteristic ringing and quilting artefacts surrounding every edge and corner (see below).

JPEG (mozjpeg)

convert def.png -flatten -quality 1 def_moz.jpeg

83 kB. Better than the default above, but still larger than the PDF. The compression artefacts are different from those of the default JPEG implementation, but the image is still of terrible quality (see below).


convert def.png def_lossy.webp

203 kB. Worse than lossless (but I didn't explore the various parameters convert offers for WEBP).


convert def.png -flatten def_spec.png
bpgenc -q 44 def_spec.png -o def_lossy.bpg

50 kB. I had to preprocess the image since I needed a screenshot of the final BPG for the comparison below. The result is indeed smaller than the PDF, and exhibits (compared to the JPEG) only moderate compression artefacts (see below). Very impressive.

Here's a comparison of a section of the above graphics.

BPG is certainly a major improvement over JPEG also for line art. However, nothing beats vector formats: the PDF is of similar size and is arbitrarily scalable. A version for an A0 poster would still be 54 kB in size, whereas a corresponding BPG of the same quality as shown above would be truly gigantic.

An ideal strategy for scientific artwork would look like this: line art, labels, and annotations as vector graphics (SVG or PDF), photographs as BPG, stored together in a PDF or SVGZ container. That's imagery for the 21st century. And, in case you didn't notice, I didn't find any reason to mention WEBP or FLIF: for either of them, there's always a better alternative. If we disregard the patents. ;)

New server

This blog had been hosted since 2008 on a vServer powered by a single core of an Intel Core2 Quad Q6600, commanding 256 MB RAM and a 12 GB HDD. As the OS, we used Debian Lenny, and we had long tried to silence the voice inside our heads warning us that security support for Lenny ended almost 4 years ago. Granted, there was not much to hack (after all, these are static pages), but I normally wouldn't tolerate such neglect, and I certainly wouldn't encourage it.

Well, we finally hauled our lazy carcasses out of their graves and managed to get a new vServer. Hardware-wise, a huge step up: two Intel Xeon E5-2680 v4 cores with 6 GB RAM and a 320 GB HDD. Software-wise, we ordered the server with Debian Jessie, which was configured very nicely, but with plenty of services we don't need. The first step was thus to clean up and to update the system to Debian Stretch, the current version of 'Testing', which in my opinion represents one of the best choices for a rolling-release server installation that is reasonably up-to-date and yet almost care-free.

From Linux 2.6.20 on our old server to 4.8.11 on the new one: what an enormous jump! System administration has also changed significantly: for example, to synchronize the time, one no longer relies on a cronjob executing 'ntpdate -s', but uses systemd-timesyncd, and instead of apt-get and apt-cache, one uses apt. Oh yes, my dear dinos, that's how it is! But since the user interface has stayed the same, it is still as easy as ever to administer the system, as long as one is able to read and write (type).

Concerning the webserver, it was haui who suggested Hiawatha. I'd never even heard of it, but after a first look I installed it (there are Debian repositories managed by Chris Wadge), and it instantly grew on me. It's small, lightweight, easy to configure, and has unique features not found in other webservers.

However, just like all other webservers I know, Hiawatha does not correctly deliver compressed scalable vector graphics (svgz). I was tired of that and wanted to avoid the need for patches, so I replaced all svgz files by svg, together with the corresponding references in all my posts:

find . -type f -name "*.svgz" | xargs gunzip -S z
find . -type f -name "*.md"  | xargs sed -i 's/\.svgz/\.svg/g'
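A small aside: the null-delimited variant of the same two commands also survives filenames containing spaces (the -r flag keeps xargs from running on an empty match list):

```shell
# Decompress every .svgz in place (gunzip -S z strips the trailing 'z'),
# then rewrite the references in the markdown sources:
find . -type f -name '*.svgz' -print0 | xargs -0 -r gunzip -S z
find . -type f -name '*.md'   -print0 | xargs -0 -r sed -i 's/\.svgz/\.svg/g'
```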

This decision turned out to be the right one. Hiawatha transparently compresses content without requiring any user interaction, and despite this manual decompression, the page size of this blog actually decreased by 50% with respect to that delivered by dhttpd, our previous webserver.

Now, concerning the IRC server, InspIRCd seemed to me the most promising candidate. Just look at that! And I wasn't disappointed: with a little help from here and there, I had it running pretty fast. What took some time was the key generation, since I wanted the TLS configuration of the server to comply with current security standards. After a lot of reading, I finally generated the key and the certificate

certtool --generate-privkey --ecc --sec-param ultra --outfile key.pem
certtool --generate-self-signed --load-privkey key.pem --template cert.cfg --outfile cert.pem

and configured the gnutls section in InspIRCd:

<module name="">
<gnutls certfile="cert.pem" keyfile="key.pem" priority="SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-MD5:-SHA1:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC">

Note that this is a rather strict configuration that will not work for clients belonging in a museum. With reasonably up-to-date systems, no problems should be encountered.

I've applied a few other tweaks to the IRC server, but I won't discuss them now as I would first like to see how they perform in practice.

LaTeX vs. Unicode

I'm using matplotlib to create figures for my publications. For axes labels, legends, and everything else requiring text and symbols in a figure, I've so far used the excellent LaTeX support of matplotlib, and the results are (obviously) highly satisfactory:


There's a disadvantage, though: there are not too many fonts to choose from. Naively, I thought that this limitation would be lifted if I used Unicode instead of LaTeX:


And wouldn't XeLaTeX even combine the advantages of both?


As you can see, matplotlib allows you to use any of these options, but what you don't see is that the desired results can be achieved only with a very limited set of fonts. For example, only a few fonts include the unicode character for a 'superscript minus' (for an overview, see here). Sadly, most of these are part of the ClearType Font Collection, which was introduced by Microsoft with Windows Vista. Free fonts with a 'superscript minus' include DejaVu Sans, Free Sans, and Free Serif. If the 'superscript minus' is instead entered as a command via the internal LaTeX support of matplotlib, many more fonts become accessible. Examples are shown in the table below. But even then one can't make any assumptions: while Source Sans Pro works fine, Source Serif Pro doesn't. I have no idea why.

You can see from my last statement that this post is not in the least authoritative. I'm just toddling around, and if you find a better way, I'd appreciate corrections and additions. That's particularly true for XeLaTeX, whose use seems to require OTF-only fonts with math table support. I wasn't even able to find a single sans-serif font with this profile :( . Others have similar problems.

Renderer | Serif              | Sans Serif
LaTeX    | Palatino, Fourier  | Kurier, CM Bright
Unicode  | Noto, Gentium Plus | Open Sans, Source Sans Pro
XeLaTeX  | Libertinus, XITS   | ?

Finally, here's an archive containing the three scripts I used to create the figures above. In each case, I let matplotlib render a PDF, convert it into an SVG with pdftocairo, and compress this SVG with gzip:

pdftocairo -svg plot_uc.pdf plot_uc.svg
gzip -S z plot_uc.svg
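When a paper needs a dozen such figures, the two commands fit in a loop (a sketch; assumes poppler's pdftocairo is installed and the PDFs follow the plot_*.pdf naming used here):

```shell
# Convert every matplotlib PDF in the directory in one go:
for f in plot_*.pdf; do
  [ -f "$f" ] || continue                # unexpanded glob: nothing to do
  pdftocairo -svg "$f" "${f%.pdf}.svg"   # PDF -> SVG
  gzip -S z "${f%.pdf}.svg"              # SVG -> compressed .svgz
done
```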

The results are compressed scalable vector graphics that are fully compatible with inkscape, should post-processing be necessary. That's how I got the unicode logo in, by the way. ;)

Contents © 2017 Cobra - Powered by Nikola