I only dimly remember the mid-90s, but one event sticks out above all: the opportunity to listen to a presentation given by Prof. Andy Tanenbaum during the Unix User Group conference in Wiesbaden. This was the time when I was mightily upsetting my fellow colleagues by bringing my own device (a shiny black NeXT cube with a 400 dpi black-and-white printer and a brilliant black-and-white display) into the office, and whenever any Windows machine in the office needed to be rebooted, I would remark on the number of months my machine had been running without a single flaw… Oh, nobody was bringing their own device then and there. I was frowned upon, but on the other hand, my reports always looked best.
Of course, there was a small group of consultants who gave me the “taste” of it, really top guys, Volker Herminghaus and Thomas Brox. I adored them, and what beautiful things they were doing with their UNIX machines, while everybody else still thought the Windows workhorse was the bee’s knees.
Coming back to Professor Tanenbaum, who is famous for his Minix system and many other thoughts on shared resources (you know, using somebody else’s computing power and disk space through communication lines, like, cloud? Amazon, anyone?): he came to talk about layered complexity and layered assumptions (starting with the hardware developers, the embedded controller manufacturers, the OS developers, the development language developers, the application developers…), causing an endless combination of assumptions that may be right – but could also be totally wrong, if only in certain situations.
He illustrated this with a practical example of why a plane crash had happened shortly before the event: the combination of wheel speed, altitude, airplane speed, and braking power did not take into account the isolated event of “aquaplaning”, which occurred due to extreme weather conditions and torrential rains. It was easy to understand and easy to follow. And it was fairly easy to learn: there is always a special condition that we cannot take into account, because we can’t imagine it. It is, so to say, the “unknown unknown”.
Philosophically, the example shows us the fallacy of calculating the odds of Fukushima happening beforehand, as well as many other examples of “manageable” risks. But it also illustrates the arrogance of developers today: we rely on the layers we choose to ignore, because it is only the odd hacker who analyzes the code at machine-language level to see what really is going on. We think we understand stuff, but we are only scratching the surface, and even feel we are above the rest in doing what we do.
Tanenbaum gave me a lecture that I never forgot and that I still keep as a fond memory from the mid-nineties. He has now decided to leave his post as a professor at university in Amsterdam, and we may say thank you for a great contribution of thought and knowledge. It would be a great idea to sit in class on 23 October 2014 in the Aula of the Vrije Universiteit, which is in the main building, at 11:45 sharp. The organizers say coffee will be served outside the Aula starting at 11:00. You may well be seeing one of the greatest minds in open source software delivering his last university lecture as a professor (I doubt he will stop thinking!). And I would assume it will be a great lecture once again.