
Trust

When UNIX co-progenitor and super-smarty-pants Ken Thompson was given a Turing Award, he offered a warning to those within earshot: you can't fully trust code you didn't create yourself. Admins and developers often find it satisfactory to review the source code of applications to determine maliciousness. And to a certain extent, this works out all right. Over time we have built up expectations of where naughty code tends to hide, based on experience. We have also chosen to trust the other tools we use during this process. We discriminate.

But there's no reason that bad stuff *has* to be in the applications where we expect to find it. Yes, the clever among us know that compilers can be bad. But we check the source of our compilers, find no bad stuff, and so we assume we are safe.

We do, though, compile the compiler, don't we?

Well, all right then: say some megalomaniac at Intel, or somewhere far upstream, decided to embed badness in the compilation software that ships with our distro. We can still look at the binary of compiled programs to determine What Is Really Going On.

We do tend to use applications to help make machine code human-readable, though, don't we?
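To make the compiler half of that chain concrete, here is a toy sketch of the trick Thompson described. This is not his actual code: the victim function name (check_password), the trigger strings, and the simple substring matching are all hypothetical simplifications, standing in for a real attack that pattern-matches the compiler's internals.

```c
/* Toy sketch of a compromised "compiler": it compiles most source
 * faithfully, but recognizes two special cases. All names and trigger
 * strings here are made up for illustration. */
#include <stdio.h>
#include <string.h>

/* Pretend compilation step: returns the source that will actually be built. */
static const char *compile(const char *source)
{
    /* Case 1: the victim program is being compiled -- inject a backdoor. */
    if (strstr(source, "int check_password(") != NULL) {
        return "int check_password(const char *pw) {\n"
               "    if (strcmp(pw, \"letmein\") == 0) return 1;  /* backdoor */\n"
               "    return real_check(pw);\n"
               "}\n";
    }
    /* Case 2: the compiler itself is being recompiled -- re-insert this same
     * trap, so the trick survives even though the compiler's source is clean. */
    if (strstr(source, "static const char *compile(") != NULL) {
        return "/* clean-looking compiler source ... plus both of these traps */";
    }
    return source;  /* everything else compiles exactly as written */
}

int main(void)
{
    /* Innocent-looking source goes in; backdoored source comes out. */
    puts(compile("int check_password(const char *pw) { return real_check(pw); }"));
    return 0;
}
```

Reading the source of either the victim program or the (recompiled) compiler turns up nothing, which is exactly the point: the misbehavior lives only in the binary.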

The point of the thought experiment isn't to stop the unbelievable interchange of ideas and applications that has brought us to where we are today in modern computing. It would be impossible to manually read the machine code of every application we use and still function in the workplace and our other communities as we are expected to.

Rather, we should be aware of the trust that we place in our tools. We should be aware that when we set out to solve or review problems, we take certain things for granted.

The point is not that we should never trust. The point is that we should make trust decisions willfully, based on reasonable deductions and facts, not on impulse or ease of use.

I came across Thompson's talk, "Reflections on Trusting Trust," today in a CS class, and it's as relevant now as it ever was. You can read the whole thing here.