The Bergofsky principle
If you don't live under a rock, you are bound to have heard about the NSA data affair. All media cover it, and it has been on the front pages for weeks. You certainly know the term 'whistleblower' by now, and you have an opinion on Edward Snowden: traitor or hero. Perhaps you lean towards the latter and are now unsure what to do. If so, let me congratulate you. People who vote for the former do not usually prove to be particularly pleasant or bright individuals.
I've seen many excellent comments in both newspapers and blogs on the political dimension of this affair, but essentially nothing substantial on the technical aspects. Sure, one can suddenly find all kinds of beginner guides to mail encryption, but the views on the effectiveness of this measure vary widely. Self-appointed computer experts, for example, criticize the media for stating that breaking asymmetric encryption takes the NSA "many" years, and boast that it would in fact take 10^290 times the age of the universe for a 2048-bit RSA key.
On the other hand, I've recently overheard a conversation among some young wannabe scientists, and one of them elaborated on the reasons why encryption is futile. He was quite explicit and stated that the NSA can break any encryption in no more than ten minutes, quoting the "Bergofsky principle" as evidence.
The "Bergofsky principle"? But of course, how could I forget that: "The principle clearly stated that if the computer tried enough keys, it was mathematically guaranteed to find the right one."
Says Dan Brown in *Digital Fortress*, St. Martin's Press, New York 2000. 😄
In reality, there's of course no "Bergofsky principle", but rather the exact opposite: provably unbreakable code, known since 1882, namely the one-time pad. It's not even that difficult to produce code which is at least close to unbreakable. I will devote one of my future entries to this topic. 😉
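Until then, here's the idea in a nutshell. The 1882 date refers to the one-time pad, first described (as far as we know) by Frank Miller. A minimal Python sketch: XOR the message with a truly random key of the same length, use that key exactly once, and every plaintext of the same length becomes an equally plausible decryption of the ciphertext.

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR with a truly random key of equal length.
    Encryption and decryption are the same operation."""
    assert len(key) == len(data), "key must be exactly as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # use once, then destroy
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message   # decryption is the same XOR
```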
Right now, however, we are concerned with the more serious question of how secure conventional e-mail encryption (using PGP or S/MIME) actually is.
Well, regarding conventional (as opposed to quantum) computers, that's public knowledge. Breaking RSA keys takes a certain, well-documented effort which can even be predicted (asymptotically). RSA-576, for example, was broken in 2003 after two years of number crunching on the cluster Parnass2, which was listed in the TOP500 in 1999. Breaking RSA-768 in 2009 required about 2^67 instructions, which would have taken Parnass2 more than a hundred years, but took once again about two years of computational effort on a more advanced and larger cluster.
Let me put that into perspective. Parnass2 had a peak performance of 30 GFLOPS (about one third of my current desktop, counting only the processor). The Earth Simulator, number one of the TOP500 in 2002, had 30 TFLOPS, and Tianhe-2, the current number one, 30 PFLOPS. In other words, Tianhe-2 is about one million times faster than Parnass2. Assuming that the NSA supercomputer under construction is able to deliver 100 PFLOPS for RSA decryption, we can easily determine the time it would take to break a single RSA key of a given size (see the sketch after the table):
| bits | time |
|------|------|
| 576  | 3 s |
| 768  | 30 min |
| 1024 | 20 days |
| 2048 | 65 million years |
| 4096 | 4 billion times the age of the universe |
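How do these figures come about? The effort of the general number field sieve grows subexponentially with the key size, so one can scale the documented RSA-768 effort of about 2^67 instructions up and down. Here is a minimal Python sketch of that calculation; the 100 PFLOPS machine is the assumption from above, and the GNFS formula ignores constant factors, so expect the results to match the table only roughly:

```python
import math

def gnfs_ops(bits: int) -> float:
    """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] for a modulus of the
    given bit length; constant factors and o(1) terms are ignored."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

# Calibration point: the RSA-768 factorization took about 2^67 instructions.
ops_768 = 2.0 ** 67
flops = 100e15  # the assumed 100 PFLOPS available for RSA decryption

for bits in (576, 768, 1024, 2048, 4096):
    seconds = ops_768 * gnfs_ops(bits) / gnfs_ops(768) / flops
    years = seconds / (365.25 * 24 * 3600)
    print(f"{bits:4d} bits: {seconds:10.3g} s  (~{years:.3g} years)")
```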
Looks like we are safe from 2048 bits on. But if supercomputers progress like they have in the past, a 2048-bit key will take only 65 years to break in 2028, and that's cutting it close. And what about quantum computers? D-Wave just sold a number of 512-qubit systems, with Google and NASA on their list of customers, and has announced a 2048-qubit version for the near future. If the D-Wave 2 were a true quantum computer capable of executing Shor's algorithm, messages encrypted with 2048-bit keys would be virtually clear text.
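The 2028 figure, by the way, is a straightforward exponential extrapolation. A quick sanity check in Python, assuming performance keeps growing by a factor of a million every 14 years as it did from Parnass2 to Tianhe-2, lands in the same ballpark:

```python
import math

# Performance grew by ~10^6 from Parnass2 (1999, 30 GFLOPS) to
# Tianhe-2 (2013, 30 PFLOPS), i.e. a factor of 10^(6/14) per year.
growth_per_year = 10 ** (6 / 14)
break_time_now = 65e6   # years for a 2048-bit key on 100 PFLOPS (see table)
acceptable = 65         # years that would make an attack conceivable

years = math.log(break_time_now / acceptable, growth_per_year)
print(f"~{years:.0f} years from 2013, i.e. around {2013 + round(years)}")
# prints: ~14 years from 2013, i.e. around 2027
```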
Perhaps that quantum-computer scenario is not very likely. But it won't hurt to think about alternative asymmetric encryption algorithms not based on integer factorization or discrete logarithms.