Who Controls Your Computer? (And How to make sure it’s you)
1 Introduction
1.1 Who pwns your computer?
1.1.1 This presentation
1.1.2 Do YOU own a computer?
1.1.3 Social solutions
1.1.4 The buck stops with you
1.1.5 Plan
1.2 Express Introduction to Computing Security
1.2.1 What if?
1.2.2 Real Threat
1.2.3 An Arms Race
1.2.4 More Art than Science
1.2.5 Battle of Wits
1.2.6 Technique Still Matters
2 Trusted Software
2.1 The lowest-hanging fruit
2.1.1 Opportunities Galore!
2.1.2 SMOP
2.1.3 Social Challenge
2.2 Auditability
2.2.1 The Only Alternative To Blind Trust
2.2.2 Firmware is Vulnerable Software
2.2.3 Source code is not enough
2.3 Reasonability
2.3.1 Harder than Debugging
2.3.2 Formal Methods
2.3.3 Clean Semantics
2.3.4 Reasonable Languages
2.4 Modularity and Compositionality
2.4.1 Divide and Conquer
2.4.2 Familiar Properties
2.4.3 Audit Surface
2.4.4 Simplicity
2.4.5 Existing Systems
2.5 Bootstrapping Trust
2.5.1 Turtles All The Way Down
2.5.2 Bootstrapping
2.5.3 Communications
2.5.4 Starting Points
3 Trusted Hardware
3.1 Going Deeper
3.1.1 Deep Pwning
3.1.2 Hardware vs Software
3.2 Trusting the CPU
3.2.1 Blinding the lower layers
3.2.2 Randomized FPGA CPU
3.2.3 More Firmware Checking
3.3 Supply Chain
3.3.1 Randomized Spot Checks
3.3.2 Random Quality
4 Trusted Wetware
4.1 Wetware
4.1.1 Softer and Harder
4.1.2 The Weakest and Strongest Link
4.1.3 Example Challenge: Voting
4.1.4 Foundations for Better Wetware
4.2 Decentralized Networks
4.2.1 Decentralized Everything
4.2.2 Blockchains
4.2.3 Bitcoin Management
4.2.4 Crypto-feudalism
4.3 Statistics
4.3.1 Scale
4.3.2 Bias
4.3.3 Strength vs Nimbleness
4.4 Sous-Veillance
4.4.1 Logical, Physical, Political Security
4.4.2 Anticipation
5 Conclusion
5.1 Security as Defense
5.1.1 Costs and Benefits
5.1.2 Hunters vs Farmers
5.1.3 The Ultimate Stake
5.1.4 Know Thy Enemy, Know Thyself
5.1.5 Your Part in The Game

AltExpo at PorcFest 2015 Jun 22–28, Lancaster, NH.

2015

Who Controls Your Computer? (And How to make sure it’s you)

François-René Rideau, TUNES Project, fare@tunes.org

Can you trust your computer? Can you control what it does and doesn’t do? Or is someone else in control? How can you even know what it is or isn’t doing? Even if it tells you, how do you know it’s not lying? As computing systems become more complex, how can “Trusted Computing” be achieved, not (just) by big corporations and centralized governments, but rather by individual citizens? I’ll discuss missing bits that could help put individuals back in control: radical auditability; more “reasonable” programming languages; controlled side-effects; capability-based security; provably correct kernels; trusted bootstrap chains; spot checks of hardware along the supply chain; trustless randomization; crypto-feudalism; decentralized network protocols; digital currencies; pervasive sous-veillance; etc. There are plenty of technologies you could be working on that could contribute to making tomorrow a safer place for individual freedom rather than a totalitarian nightmare of centralized control over computing.

This essay was the basis for a presentation given at AltExpo as part of PorcFest XII on June 25th, 2015, in Lancaster, NH. It also made it to the front page of Hacker News in 2017.

1 Introduction

1.1 Who pwns your computer?

1.1.1 This presentation

Hello, I’m François-René Rideau, a one-man think tank originally from France and author of Bastiat.org. At PorcFest, I’m mostly known for my pony song — but I’m not going to sing it this time. Today, I’m here to tell you about who owns your computer. You may think it’s you. But many lay a valid claim to the contrary, including big corporations, governments and large mafia gangs (but I’m repeating myself).

1.1.2 Do YOU own a computer?

Who amongst you here carries an electronic device capable of universal computation? Can you raise your hand if you have a cell phone? a tablet? a laptop? If not here, who possesses a computer or another such device at home? OK, who here does NOT possess any kind of computer? Well, if you drive a car that is less than twenty years old, I have bad news for you.

Now that we have established that we live surrounded by computers, and that smaller computers are on the way to becoming even more ubiquitous around us, there remains the question:

Which of you knows what code is running on your computers? Indeed, can you trust your computer? How can you even know what it is or isn’t doing? Even if it tells you, how do you know it’s not lying? When push comes to shove, can you control what it does and doesn’t do? And if you are not in control, who is?

1.1.3 Social solutions

Notice that this technical problem usually has a social solution: You rely on social control to keep in line those who control your computer. You trust your computers to do your bidding, because you believe that those who are in control will by and large suffer negative consequences if they betray your trust. The same goes in any and every technical field, really. No one can be a specialist in everything, and we rely on social means to determine the authorities in whom we entrust technical matters.

But this solution of course breaks down when those in control of the computers are not accountable to those the computers are purported to serve; indeed, when they have a monopoly on the social accounting, they will gladly betray your trust and suffer no ill consequences from it. And this has ceased to be speculation and conspiracy theory since Edward Snowden revealed that the US government has indeed been doing just that for decades: remotely watching everyone’s computer communications, controlling the computers of anyone on their target list, etc., with no social control whatsoever — indeed, they get to punish those who denounce or resist them. And then, instead of having a social solution to a technical problem, you have a social problem creating technical problems to extend its social influence.

1.1.4 The buck stops with you

To misquote a famous misquote of Trotsky: you may not be interested in computing security, but computing security is interested in you.

And so, someone has to be the authority that computing security is entrusted to. You can pass the buck around, but it has to stop somewhere, with someone. And how is that someone supposed to do more than throw his hands in the air helplessly and use impressive buzzwords to explain problems away when they become painfully obvious? What kind of measures can an honest expert take to ensure the security of computing systems? How do you distinguish the honest experts who actually take such measures from crooks who are good at playing expert to the uneducated masses? THAT is what I want to discuss today.

This presentation is directed towards people who understand the importance of this particular issue and want to make a difference: either by taking an active role yourself in developing technical solutions, or by funding such efforts. But even if you don’t take an active role in addressing this issue, by being aware of it you can at least avoid becoming an active part of the problem, by not partaking in the problematic behaviors and not funding negative efforts.

1.1.5 Plan

This presentation is divided into four parts: First, this introduction is followed by a thought experiment prototypical of what computing security is about. Second, a quick tour of how we can better write software to reduce security issues. Third, an assessment of what will become necessary to ensure hardware security. Last but not least, a discussion of the social structures that underlie decentralized security.

1.2 Express Introduction to Computing Security

1.2.1 What if?

Let’s start with a thought experiment.

What if tomorrow, some large company or government started to use Bitcoin on a large scale, i.e., required everyone (customers, employees, providers, etc.) to install some Bitcoin software?

They would have to spend a lot of time training people. They would have to provide standardized software installation on readily available computing systems. They would become a major target for all the black hat hackers in the world. Considering how insecure all current computing systems are, pretty soon, some pirate would find a way to crack their security, insert his software on a large number of these computing systems, and make off with all the hard-earned money of all the people forced into adopting this technology. And since Bitcoin transfers are irreversible, there would be nothing left for all these people but to weep at their losses.

Computing security is made of thought experiments like that, because you can’t just wait for the disaster to happen: you must anticipate what would happen and take measures before it happens.

1.2.2 Real Threat

Note that just because it deals with “What ifs” doesn’t mean computing security isn’t about real threats. Many people are being arrested because the NSA and FBI or their foreign equivalents could control their communications and/or their computers. People have lost lots of money because crackers could access their secret passwords and keys. Businesses have been destroyed because of espionage.

Computer pwnage also famously happens to military weapons: Argentinian Exocet missiles have been disabled by France for its British allies; US drones have been taken over by the Iranian military by fooling their altitude meters; and extremely clever viruses from a combined NSA and Mossad task force rooted Iranian computers and set back nuclear weapon production by years.

And of course, plenty of techniques have been demonstrated that could crash a car, a plane, or someone’s pacemaker. And though their rumored real-life uses have never been proven, organizations within the US, Chinese or Russian governments are clearly quite capable of using them if they really want to — if they don’t, it’s just that they usually have much cheaper means of getting at their targets.

Therefore, if computing security deals with “what ifs”, it’s because by the time the threats are more than “what ifs”, it’s often too late to do anything but cry, if you’re still alive to cry.

1.2.3 An Arms Race

And so, computing security is a game of cat and mouse between defenders and attackers. The defender has the advantage of playing the first move: he decides what system he’ll use. But once he’s played his move, it’s too late to undo it — now the attacker may take advantage of the weaknesses in the system. If a computer is only for play, if all the data on it is public and reliably backed up, if there are no secrets, not even precious passwords, then it might be enough to wait for an attack, and fix things and clean up after the fact — though even those emergency fixes are expensive. But if a computer stores private information, or access codes to sensitive information, including banking or money in the form of bitcoins, then the attacker only has to win once to access it, whereas the defender has to win every time.

In the above thought experiment, forcing people to use a technology that requires serious computing security discipline, when even professionals often fail at maintaining it consistently, is a recipe for disaster. Providing a large homogeneous pool of victims is also a way to attract attackers. Presumably, to allow for affordable training and technical support, the large installation will have a monoculture of identical or mostly similar computing systems; by having a strong defense team, they can probably raise the barrier to breaking in; but no defense is impregnable. Once a single system is taken, whether by preying on a weak victim, blackmailing a user of illicit drugs, bribing a traitor, or recruiting an accomplice, the monoculture means that the attackers will be able to leverage the knowledge they gain to develop reproducible attacks with which to penetrate a large number of systems, helped by the fact that none of today’s systems is designed for security.

1.2.4 More Art than Science

Though computing security uses the scientific method, with thought experiments, actual experiments, peer review, and challenges, in the end it is more art than science — because science quickly accumulates knowledge about a universe that is immutable or changes slowly; but the matter of the art of computing security changes with the art itself: the more elaborate the general knowledge of the defenders and the more sophisticated their defenses, the more elaborate the general knowledge of the attackers and the more sophisticated their attacks. (In other words, computing security is anti-inductive.) Hence, game-changing theories will solve entire classes of issues and eliminate some attacks for good; while game-changing attacks will subvert the hypotheses of previous theories and render some defenses henceforth useless. It’s an arms race. A bit like in biology, where viruses and immune systems constantly try to one-up each other, except that viruses don’t design their attacks, they evolve them, which makes them both more resilient yet less sophisticated; modern man thus has a local advantage, thanks to immunization, quarantine, and modern medicine, whereby he can use global information whereas the viruses can only use local information (though in a massively parallel way). With biological viruses, it’s a battle of man against nature, of wits against brute force. But in computing security, it’s a battle of man against man, of wits against wits. The predators and the prey are all human; and one can turn into the other.

1.2.5 Battle of Wits

It’s an adversarial game where you never know for sure whether you’re losing or winning: As a defender, are you so safe that no one can afford a serious attack against you? Or are you so deeply pwned that you can’t even detect that you’re under attack? Are you safe for now, but vulnerable to a zero-day attack that will be published tomorrow? As an attacker, did you actually penetrate the enemy system undetected? Or did you just penetrate a honeypot while your activity is being closely watched and traced back to you? You may be successful for now, yet will you see all your efforts cancelled by a security change implemented tomorrow? Or will another attacker capture the bitcoins and make off with them before you can complete your attack? Sometimes, it pays to wait a bit and strengthen your defense or attack before you launch your system, so you can win bigger. Sometimes, waiting too long just means losing to the adversary or to the competition.

Computing security, like other kinds of security, has this primal appeal of fighting, of being us against them, etc. Except that in the case of the defender, you can only see the enemy when he’s either insignificant or overwhelming, and so there isn’t any adrenaline rush unless you’ve already lost (though in some rare cases of having discreetly discovered a long-term penetration, you may try to turn the tables on the attacker). But of course fighting is a negative-sum game, one that is only socially productive at the margin, when it protects the property of otherwise productive people. A security framework is therefore only worth it when it enables more productive arrangements than could be safely afforded using simpler and cheaper frameworks. Economic reasoning is always important in computing security. For instance, it’s not worth it to pay more to protect some property than the property itself is worth (after factoring in how long the protection and the property are respectively expected to last). And it’s not worth it to pay more to attack some property than the property itself is worth, either. Therefore, economic considerations limit how far the arms race goes at any moment.

Last but not least, it must be understood that any system is only as strong as its weakest link. Your strong steel doors will be useless if the walls are made of paper. And even when you’ve afforded all the defenses that are worth acquiring and done all your technical due diligence — then all you’ve done is make all technical attack vectors more expensive than social attack vectors. In other words, at best, computing security will make the humans your weakest link. Security, in the end, is and remains a social problem, even and particularly when you’ve solved the technical problems.

1.2.6 Technique Still Matters

But that doesn’t mean that technical factors don’t matter: good computing security can change the game by allowing better organizations to exist and be stable than would otherwise be possible. To come back to this well-known case in point: Bitcoin makes it possible to build decentralized trust sufficient to conduct remote, somewhat-anonymous monetary transactions, where previously only in-person exchange and centralized systems could support such transactions. In the end, the human factor is a limit to what Bitcoin can accomplish; yet this remarkable piece of compsec engineering has already accomplished what was impossible before.

And so, with this understanding of what computing security consists in, what are the under-developed techniques that could help individuals achieve trust in ever more complex computing systems, in lieu of big corporations and centralized governments imposing their brand of “Trusted Computing”, where they trust that they are in control and the citizens are not?

2 Trusted Software

2.1 The lowest-hanging fruit

2.1.1 Opportunities Galore!

The part of computer architecture most visibly affected by security issues is software. It’s the part closest to the end-user. It’s the part that changes most often. It’s the part with the lowest barrier to entry for attackers. It’s the part so complex it’s nigh impossible to keep fully under control yet so simple it’s easy to subvert when you find a bug. It’s the part so rigid in its behavior that defects can be relied upon, yet so contextually fluid that it’s easy for these defects to remain hidden during normal use. It’s the part that comes in so many conceptual layers that don’t fit together that it’s easy for a vulnerability to slip into the unexpected space between two layers. It’s the part that is easiest to modify, reproduce and distribute, and indeed the most actively changed. For all these reasons and more, it’s the part of computing systems that most urgently needs improved security. And there is a lot of room for improvement indeed, before secure terminals for everyone become a matter of course, if ever.

(Note that if you’re not familiar with what software development is, this article “What is Code?” by Paul Ford is a good introduction.)

2.1.2 SMOP

Happily, the theory of how software can be secure is by and large well-known. However, the practice is severely lacking, and a fix requires a large amount of infrastructure work. But most importantly, putting this theory into practice requires an important change in mentality from software developers; and that’s where you cannot change others — but you can change yourself.

Therefore, changing the software infrastructure to make it secure is the easy part of the equation, at least from the purely technical point of view: it’s just a SMOP: a Simple Matter Of Programming. In other words, it’s a lot of hard work, but while there will no doubt be technical issues to be solved along the way, the job may be completed without requiring a stroke of genius to lift any theoretical hurdle, and without requiring a change in the laws of computing as we know them.

2.1.3 Social Challenge

From a socio-economic point of view, however, it is quite a tough and uncertain challenge: to create a secure system, one must compete with purposefully insecure or irremediably misdesigned rivals funded to the tune of billions of dollars in aggregate; one must support integration into a trillion-dollar economy, and provide for smooth migration of thousands of software drivers and applications from an ecosystem where insecure habits are deeply ingrained and incentives for security are so far insufficient, into a new architecture that may benefit uncoordinated masses, but goes against powerful narrow interests. Between strategy to wisely choose which step to take next, marketing to convince other key players to adopt such architectural changes, and a clear vision to drive all these efforts towards a common goal, this endeavour includes doing things that have never been done before and will never be done afterwards.

2.2 Auditability

2.2.1 The Only Alternative To Blind Trust

The first and foremost requirement for users to be able to trust the security of their software is auditability: the users or their trustees must be able to audit what software the device is running and what that software is doing. If you can’t know and double-check that, the person who controls the software controls the computer, not you; he can promise anything he wants about the software, and you have to blindly trust him with no recourse; you won’t find out until way too late, when you get personally burnt by your computer working against you, or when there’s a huge scandal and his bad faith or utter incompetence at security somehow becomes obvious thanks to a leaker like Snowden.

Already, you should be using Free Software infrastructure, such that you can audit and modify the source code and check that the binary code indeed matches, using reproducible builds. At the very least, software you use should come with complete source code, as well as all the build tools it uses. It is conceivable to have toys and gadgets so sandboxed that they pose no problem even though they are not trusted; but as we’ll see below, they still constitute a potential security issue, and it is not prudent to run them at all on a security-sensitive device. And if there is any bit of software that you cannot audit as part of your trusted code base, then you should consider that it is probably already pwned, or will be soon. And since a system is only as safe as its least safe component, if you use any single piece of such software, you are at risk.

2.2.2 Firmware is Vulnerable Software

Unhappily, even if you are using free software built from signed, audited sources, odds are your computer contains a lot of firmware that you don’t control. Cell phones are notorious for only being allowed to run with a baseband processor running unauditable, purposefully insecure, government-controlled firmware, whereby government mafia goons can control your phone (not that they usually need it to control you). Only a very few security-minded cell phones completely segregate this government-controlled modem processor from having access to the main processor’s data (the GTA04 and Neo900 are the only ones known to me to do it).

But most devices come with such firmware, and the NSA is notably known to have written malware that hacks into your USB controller firmware, or your power management firmware. It will take a change of culture to achieve computer platforms where all software and firmware is auditable. One possibility is that big companies like Google may grow tired of leaking their (and their customers’) secrets to the very worst governments, start taking these things seriously, and benefit the rest of the public by sharing their methods. Another possibility is that computing devices become so cheap and so fast that even though achieving security may cost a hundred times the price at a hundredth of the speed of insecure devices, people who care about their privacy will gladly bear the inconvenience.

2.2.3 Source code is not enough

The second requirement is that the software be written in a way that you (or your trustees) can reason about it and prove that it is indeed safe. Indeed, even black-box binary proprietary software can be made sense of, given enough time and resources; but since it is in a form where you must painfully reverse-engineer the entire design before you can start making sense of global security properties, it just costs too much to achieve confidence, at which point the software is a black box no more because you have figured it all out. It would be simpler and cheaper to rewrite it all from scratch, or to buy access to the source code. So in practice, if it’s a black box, you shouldn’t trust it. But that’s not enough. Even with access to the source code, you don’t have access to the minds of the authors. If the code wasn’t written specifically to be reasoned about, then you’ll still have to reverse-engineer what was in the authors’ minds before you can assess that it’s doing what the user believes it’s doing, and nothing else, intentional or unintentional.

Therefore, for the security audit of any large amount of software to be affordable, every program should be written in a “reasonable” way that makes it easy to reason about its good security properties, which should be as obvious as possible, rather than obfuscating its meaning. And not just that: it should be written in a “reasonable” programming language, where the “obvious” properties are actually what the programmer imagines them to be, instead of being rife with subtle complexities that allow underhanded malware to look innocuous while actually being evil. For the purpose of security, it should be easy to reason about what the program does, but also and most importantly about what it doesn’t do.

2.3 Reasonability

2.3.1 Harder than Debugging

What makes a programming language “reasonable” is the ability for the auditor to reason about programs, and thereby assess that a program indeed doesn’t do anything bad, no matter what inputs it receives.

Reasoning about programs is already an extremely difficult task in itself; anything that makes it harder quickly makes it intractable — especially considering the size of the code base that requires the audit. Therefore, if you want to be able to trust software running on your machine, it is extremely important that this software be written in a language that makes it hard for anything bad to stay hidden from the auditor.

Note that this is strictly harder than merely detecting bugs: indeed, debugging a program is merely fighting nature, naturally occurring random events, and making sure that nothing bad happens in these cases, whereas making a program secure is fighting the devil, an enemy who will deliberately contrive the worst possible sequence of events to subvert your system and take control of your resources. Therefore, a code base that is struggling with bugs probably has security issues aplenty and can hardly be trusted with controlling critical data and devices in an adversarial situation (though of course the situation can be so bad that it’s still not even the weakest link).

Finally, note that the cost of getting a proof of security, whether formal or informal, is yet again much higher and generally unaffordable if the code was not written with such a proof in mind, and the auditor has to figure out a proof after the fact.

2.3.2 Formal Methods

While it’s good for some trained auditor to be able to reason informally about a program and convince himself that the program is safe, that method both doesn’t scale and requires too much personal trust in strangers: talented auditors can only seriously audit so much code, and it’s hard for someone else to assess both how good the auditor was and how much attention he gave to an audit. When security is very important to a lot of people, it can become worth it to use formal methods to prove that the program is indeed free from the kinds of defects known to cause serious security issues.

Some people claim that formally proving correctness of programs in general is impossible. That’s obviously false: a programmer writes a program because he believes the program will be correct, for a reason; unless he’s just a monkey typing programs at random until they magically happen to do what was intended. A proof is just an elucidation of the reasons why the programmer’s belief is rooted in reality — or why it isn’t, when looking for the proof instead yields a counter-example.
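
To make this concrete, here is a tiny, purely illustrative example of what a machine-checked proof looks like, written for the Lean 4 proof assistant (the function, the property, and all names are hypothetical): the claim is trivial, but the proof checker will reject the theorem unless the stated reason actually holds for every possible input.

    -- A toy program and its specification (hypothetical example).
    def double (n : Nat) : Nat := n + n

    -- The proof obligation: double agrees with multiplication by two, for all n.
    theorem double_eq (n : Nat) : double n = 2 * n := by
      unfold double   -- expose the definition
      omega           -- a decision procedure for linear arithmetic closes the goal

Real systems prove much subtler properties (memory safety, non-interference, protocol correctness), but the economics are the same: the proof is written once, and the checker re-verifies it mechanically ever after.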

However, possible is one thing, affordable is another. It is costly to formalize a criterion of correctness, and even more so a verifiable proof of correctness. But where security is a matter of life and death, it is obviously more affordable to pay for security than to pay for lack of security. Some projects, such as SQLite, have expended so much effort on informal testing that in retrospect they could quite probably have afforded to use formal methods instead.

Happily, quite a lot of progress has been made in the last few decades in developing tools that make formal proofs possible and affordable; also, these tools used to require a large amount of CPU and memory to check proofs, but by today’s standards, the same amount is quite affordable.

As for the limitations of formal proofs, however, let’s note that not only is technical correctness only part of security, it is also not always obvious which correctness properties are sufficient for security. Even when you can formally prove that some kinds of attacks are impossible, it doesn’t mean that other attacks, which subvert the assumptions of the formal proof, aren’t possible. Formal methods are thus a necessary component of future efforts to reclaim ownership of your devices, but they can never prove that a system is fully secure, only that some common attack vectors are defeated — which is still extremely important.

2.3.3 Clean Semantics

To be reasonable, a programming language must have clean semantics, i.e. the meaning of every program should be as clear and as simple as possible. One useful criterion for clean semantics is that a complete mathematical description of the language’s core principles should fit in a handful of pages, and the rest of the language can then be defined as well-designed libraries on top of this core. Another useful criterion for clean semantics is that the language should have a somewhat efficient compiler: indeed, an efficient compiler is a symptom that at least one program (the compiler) is able to reason about programs enough to produce efficient code. If writing a compiler is so hard that no one could do it, then the semantics of the language is probably hard to reason about. It’s a bad symptom for a language to only have interpreters (or even worse, a single interpreter). These criteria already rule out a lot of unreasonable languages; but they are not sufficient conditions.

Ambiguity or terseness in syntax can hide a discrepancy in meaning between what the programmer or auditor believes the program does and what it actually does. An automatically-enforced coding style can help alleviate these issues, but requires proper tool support for the programming language. Subtlety or “cleverness” in the treatment of corner cases can drastically increase the number of cases to consider when reasoning about a program, and tremendously increase its cost when it leads to combinatorial explosion; amateur programming language designers think such guesswork is “helpful” to programmers, but it actually ends up making reasoning about programs much harder. The worst offenders are languages like Perl and PHP, and commonly used languages like Python, Ruby or JavaScript are not so good. Meanwhile, undefined behaviors and the sacrifice of safety to speed, as in C and C++, can catastrophically amplify tiny bugs into huge security issues.

Now, for every program, we can distinguish intrinsic complexity and incidental complexity. The intrinsic complexity is inherent to the problem that the program is solving: any solution to the problem will necessarily include at least that much complexity, though differently styled solutions may displace the complexity from one part of the program to another, according to various tradeoffs — and the displacement can be worth it or not, depending on what the programmers or the users value. The incidental complexity is any complexity in the program beyond this intrinsic complexity; it is complexity related to suboptimal choices of representation or implementation in the software, that could have been done away with if better choices had been made by the programmer and/or by the programming-language designer. Incidental complexity, whether introduced by a badly designed program or a badly designed programming language, typically causes linear slowdown when writing the code or running it, but may cause exponential slowdown when reasoning about the code, due to combinatorial explosion of n-dimensional corner cases when multiple variables or iterations are considered. That is why it is important to use languages where every fragment is reasonable, but also expressive enough to not introduce complexity in the overall program; myopia in design often leads to inexpressive languages that decrease local complexity at the expense of global complexity.

The meaning of every program fragment must be extremely predictable from what the auditor can perceive, and not depend on invisible or hard-to-see details (such as invisible whitespace, or easily confused almost-synonyms with opposite meanings). From the source code, without knowing any of the runtime context, it must be possible at compile-time to establish enough of the meaning of the program to assert its safety. Anything unpredictable should be completely irrelevant to the correctness of the program. In particular, the interactions with other program fragments should be minimal and well-defined; it should be possible to reason about the meaning of a fragment without having to constantly worry about weird interference from other fragments known or unknown. Therefore, all operations should have as few side-effects as possible. The language should thus allow the programmer to define each program fragment such that the only possible side-effects to consider belong to a limited set of side-effects, all of them useful; and the system should enforce that there will be no undesirable interference that one needs to reason about.

2.3.4 Reasonable Languages

To make the world a safer place, one thing you can do is to use and foster the use of more reasonable programming practices, including more reasonable languages, than is common practice. This is unhappily quite easy, since the common practice is that most programming languages make it hard for humans and machines to reason about programs, either formally or informally.

Currently, OCaml (thanks to its integration with Coq) and Haskell (and its more advanced cousins Agda and Idris) seem to be the most reasonable languages among those that possess a large codebase of practical libraries. If you need control over low-level resources then Rust seems to be the most reasonable systems programming language.

Depending on your specific needs, other languages are available; some languages are even more reasonable though unhappily somewhat less practical at the moment, such as ATS. There even exist solutions like CompCert and VST to reason about programs written in C, where needed; though if you’re at that relatively low level of abstraction, you might instead try Ada SPARK (and at a somewhat less low level of abstraction, there is Spec#). Then again, you can help advance the state of the art by working on systems such as BedRock.

Strongly typed, statically compiled languages tend to be much more reasonable than others, at least for small enough programs — until their strictures force programmers into implementing an “extension language”, at which point things can become much worse than with a dynamic language. Happily, some techniques such as typed staged evaluation exist that can minimize the need for unsafe extension languages. If in the end static typing cramps your style, because you’re doing rapid prototyping, or you otherwise require dynamic typing, or more clever types than these typesystems can provide, or types that vary faster than they can cope with, then you should at least use one of the more reasonable dynamic programming languages: Racket, Common Lisp, Clojure, Scheme, and other languages in the Lisp family are much more reasonable than most common “scripting languages”; when resources are limited, Lua also seems to be an acceptable practical alternative.

2.4 Modularity and Compositionality

2.4.1 Divide and Conquer

Modularity and compositionality are the ability to “divide and conquer” programming problems into subproblems. These dual principles make a security audit possible at all, where it would otherwise be mind-bogglingly hard.

To allow reasoning with limited resources and avoid combinatorial explosion, a programming language must be modular and compositional: it must be possible to break down a system into small parts, and conversely to combine small parts together back into a system. Then, each fragment can be examined and reasoned about separately; the properties of the combination can then be deduced from the properties of the fragments and from the way they are combined, in a much simpler way than when reasoning about a random program of the same size as the combination.

2.4.2 Familiar Properties

These principles, when applied to every expression of the language, yield functional programming: you can decompose your programs into simpler functions that you compose. Applied at the level of files, they mean separate compilation: you should be able to compile and analyze your files or groups of files separately or mostly so, only having to look at the declared interfaces of other files, not at their implementations. Applied at the level of processes, they mean capabilities: you should be able to constrain processes to only interact with each other through well-defined, limited ways that you can reason about.
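
As a small sketch of that last point (an illustrative fragment in OCaml, one of the languages discussed above; OCaml by itself does not remove ambient authority, so this only shows the style): untrusted code is handed a record of functions and nothing else, so the interface it receives spells out the complete set of effects it is meant to have.

    (* The only operations the plugin is granted: a single logging capability. *)
    type logger_cap = {
      log : string -> unit;
    }

    (* Trusted code constructs the capability, closing over the real resource. *)
    let make_logger (out : out_channel) : logger_cap =
      { log = (fun msg -> output_string out (msg ^ "\n")) }

    (* Untrusted code is written against the capability only: it is given no
       file handles, sockets, or other authority, just [cap.log]. *)
    let run_plugin (cap : logger_cap) (input : string) : int =
      cap.log ("plugin saw: " ^ input);
      String.length input

    let () =
      let cap = make_logger stdout in
      Printf.printf "result = %d\n" (run_plugin cap "hello")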

These principles probably apply in more ways, to any point of view you can have on your program: if the point of view is modular and compositional, the elements can be studied separately and the study of the combination can be limited to the study of the interfaces. Compositionality is just as important as modularity, and is its dual face: with it, for instance, you can build and publish safe scenarios as composite capabilities made of a choice of combinations of smaller ones, instead of having to build and verify these scenarios every time, which doesn’t scale.

2.4.3 Audit Surface

Once you have robust modular compositional abstractions, you only need to formally verify the system’s minimal trusted kernel, its programming language implementations, a few sensitive components, and a few generic scenarios. Then, without having to look too deeply at what an application does, just by the fact that it’s written in a safe language and run by the safe platform, and its types match those of a safe scenario, you can tell that it doesn’t contain any of the usual attacks. In other words, with a reasonable, modular, system, you can minimize your audit surface.

The ability to trust that interactions only happen through the allowed interface is abstraction, and it is also very important for a reasonable language. This requires the definition and the enforcement of abstractions inside each program; but it also requires the definition and the enforcement of abstractions in the data exchanged between programs. Language-Theoretic Security (the safe validation of program inputs and outputs as separate from processing, and the safe matching of interfaces between programs) should therefore be achieved by construction.
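
For instance, here is a minimal sketch (with hypothetical names) of enforcing an abstraction at a boundary, in that Language-Theoretic Security spirit: the representation is hidden behind the module signature, so the only way to obtain a value of the abstract type is through the validating constructor, and downstream code can rely on the invariant without re-checking it.

    (* A validated input type: construction and validation are inseparable. *)
    module Username : sig
      type t                                (* representation is hidden *)
      val of_string : string -> t option    (* the only way to build one *)
      val to_string : t -> string
    end = struct
      type t = string
      let valid_char c = (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')
      let of_string s =
        if String.length s > 0 && String.length s <= 32
           && String.for_all valid_char s
        then Some s else None
      let to_string s = s
    end

    let () =
      match Username.of_string "alice42" with
      | Some u -> print_endline ("ok: " ^ Username.to_string u)
      | None -> print_endline "rejected"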

Now, even once you have the proper security model in place, and suitable abstractions to enforce it, a program may be subverting your security model by somehow going under the provided abstraction. Even if you use formal methods to eliminate the thousands of software bugs usually found in languages and operating systems, consider the RowHammer attack that takes advantage of a hardware bug in memory chips: by massively “hammering” at memory cells, you can on some machines modify “nearby” memory cells in a way that defies control by the system or programming language. Therefore, you should never blindly trust your unaudited code to run on your system, even when your system is otherwise safe and correct in all the normal cases: for malicious code will find a way to create abnormal cases where your system is not correct.

2.4.4 Simplicity

Now, another good heuristic to determine that nothing is fishy is simplicity: if the types are as simple as they need to be, the implementation is as simple as it can be, and all (or most) of the information about the program is in its types, from which the behavior can be deduced (maybe even automatically), then there is no space left for complex tricks that subvert the system. Of course, there is no absolute judge of simplicity, so this remains a heuristic that can never be fully automated (on this topic, see the theory of Kolmogorov Complexity). Yet, many analyses can still be partly automated, and many techniques can allow more declarative programming, whereby more behavior is deduced from smaller specifications that are auditable. Syntactic abstraction allows programmers to define such automation and reexport it as more refined, safer languages that make it easier to both build and audit further programs — that’s modularity and compositionality applied to programming languages themselves. Thus, even without full automation, using simplicity as a criterion can tremendously reduce the attack surface of your system.

“The main reason to always choose the simplest explanation is that it leaves least leeway for parasites to manipulate you. If you don’t pick the simplest explanation, you’re being manipulated.”

2.4.5 Existing Systems

There again, there are plenty of systems that you can use as starting points or as source of code or inspiration: seL4 is a verified capability kernel; Qubes is a Linux distribution where every application is sandboxed; Quark is a browser where every page is sandboxed; Bitfrost, the OLPC security layer, had a lot of good ideas; Android, Chrome, iOS all have some capabilities in them, though they are not compositional; NixOS is a system with pure functional packages, auditable whole-system control, and atomic distributed deployment (yet much is needed to make it secure); SAFE is a general project to build safe computing systems; E is a language designed for secure distributed computations; Coyotos was an operating system designed for security; etc.

2.5 Bootstrapping Trust

2.5.1 Turtles All The Way Down

At least in theory, we therefore know how to build ourselves small paradises of trusted code. Unhappily, we also know that even if you can trust all the source code on the system, you still cannot blindly trust the system — because, as Gödel once established, the source code is not and can never be all the code. Your code always relies on semantic foundations to tell what the code means; and even if you manage to formalize these foundations, you’ll find they in turn rely on further meta-foundations, to which there is no end. It’s Turtles All The Way Down.

Indeed, Ken Thompson, in his Turing Award lecture, Reflections on Trusting Trust, famously demonstrated how, by subverting the compiler, he could introduce a security backdoor in the login program despite its source code being secure; furthermore, the compiler modification that introduced this security backdoor would also reproduce itself when you recompiled the compiler, even when compiling the compiler from unmodified secure source code. Therefore, after initially bootstrapping this attack, there was no malicious source code left in the system to distinguish a safe system from a subverted system, yet a subverted system would remain backdoored forever (well, until someone manages to subvert the subversion).

2.5.2 Bootstrapping

The solution to avoid a covert infection during system bootstrap is to audit the bootstrap itself so as to make sure it is correct. A trusted machine has to produce trusted kernels and compilers and bootstrap binaries, which then have to be installed in a trusted way on other machines. How can you trust the trusted machine’s compiler that builds your trusted software? You must have double-checked the way the trusted machine’s compiler itself was built, which may have happened from a slower and less efficient but smaller and simpler compiler, which itself may have been bootstrapped from something smaller and simpler, etc., which may have ultimately been bootstrapped by a small program in assembler entered directly onto punched tape, or something. In other words, it is important that there be trusted computers somewhere, each brought up with a trusted bootstrap chain and physically protected from tampering, that can provide trusted software from which to bootstrap other computers.
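
To make the bookkeeping of such a chain concrete, here is a toy sketch (the stage names and digests are made up, and it uses OCaml’s stdlib Digest, i.e. MD5, purely for brevity; a real chain would use a strong hash, signatures, and reproducible builds): each stage binary must match a previously audited digest before it is trusted to build the next stage.

    (* Hypothetical bootstrap stages and their audited digests (made-up values). *)
    let expected_digests = [
      "stage0-hex-assembler", "d41d8cd98f00b204e9800998ecf8427e";
      "stage1-mini-compiler", "0123456789abcdef0123456789abcdef";
      "stage2-full-compiler", "fedcba9876543210fedcba9876543210";
    ]

    (* Refuse to proceed unless the stage on disk matches its audited digest. *)
    let verify_stage (name, audited) =
      let actual = Digest.to_hex (Digest.file name) in
      if String.equal actual audited
      then Printf.printf "ok: %s\n" name
      else failwith (name ^ ": digest mismatch; do not trust this stage")

    let () = List.iter verify_stage expected_digests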

For a trustworthy bootstrap chain to be possible, each step should remain simple enough to be audited. The Viewpoints Research Institute has notably spearheaded the development of computing systems that are universal yet small enough to be audited. They haven’t focused on formal methods, or on bootstrapping larger systems, but the same general techniques could be used — though for larger systems, you could imagine that in the future, compiler toolchains (especially so for trusted systems) would be tuned towards emitting not just executable code, but a verifiable bootstrap path to the executable code. Once again, radical simplicity is essential for the bootstrap subsystems being audited to fit in one brainful. “Keep It Small and Simple” — or else.

2.5.3 Communications

When one computer downloads software from another, it implicitly trusts the other computer. It also trusts that the communication link is safe. Cryptography can help keep the data being exchanged confidential by encrypting the communication, and can help ensure the integrity of the software (assuming you can identify and trust the original author or some auditor who cryptographically signed the software). However, the main cryptographic communication protocols are broken or incomplete in many ways.

Let’s assume for a moment that the cryptography itself is working. This in itself could be improved using proven-correct, audited cryptographic libraries. They should be written in a language that precludes the catastrophic low-level failures recently found in OpenSSL. They should expose a “just enough” API that prevents downgrade attacks. And of course, they should avoid any cipher that is known or suspected to be broken (e.g. NSA-proposed algorithms and EC curves). There are good reasons to believe that proper use of cryptography works at keeping communications confidential, untampered, and authenticated — but that still leaves the issue of using it properly and avoiding extra-cryptographic attacks.
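
As an illustration of what a “just enough” API could look like (a hypothetical signature, not an existing library): the interface offers exactly one authenticated-encryption operation with a single fixed cipher suite and no negotiation parameters, so there is no knob through which a caller can be talked into a weaker mode.

    (* A hypothetical "just enough" interface: no cipher choice, no version
       negotiation, no unauthenticated mode; hence no downgrade surface. *)
    module type SECURE_CHANNEL = sig
      type key
      val generate_key : unit -> key
      val seal  : key -> string -> string           (* authenticated encryption *)
      val open_ : key -> string -> string option    (* None on any failure *)
    end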

The most common communication encryption protocol, TLS, relies on trusting a number of centralized “Certification Authorities” (or CAs) to identify the other party. Big mafias and governments (but I’m repeating myself) can easily subvert one of the many CAs when they want to target one person’s communication so as to decrypt it. They can’t do it openly for everyone or people would stop using this system (although the Chinese government probably does it in China), but they don’t hesitate to use it against specific targets — still, when they get caught doing it, they may be burning expensive ammunition, so they have to restrict their use of this technique. What this means is that TLS encryption makes communication safe against attacks by small enemies, but not against big enemies.

Another popular encryption protocol is SSH, which relies on remembering identities previously trusted and noticing changes. However, it doesn’t provide any good way to bootstrap the initial trust or to establish trust in a change; these things can be done, but they have to be done outside the protocol, and require special configuration by experts. While SSH is quite usable by experts within an organization, it therefore doesn’t scale to general use by the general public.
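
The trust-on-first-use logic at the heart of SSH’s approach is simple enough to sketch in a few lines (an illustrative toy, with an in-memory table standing in for ~/.ssh/known_hosts and plain strings standing in for key fingerprints):

    type decision = Trust_on_first_use | Match | Mismatch

    (* In-memory stand-in for the known_hosts file. *)
    let known_hosts : (string, string) Hashtbl.t = Hashtbl.create 16

    let check_peer ~host ~fingerprint =
      match Hashtbl.find_opt known_hosts host with
      | None -> Hashtbl.add known_hosts host fingerprint; Trust_on_first_use
      | Some fp when String.equal fp fingerprint -> Match
      | Some _ -> Mismatch  (* key changed: possible man-in-the-middle *)

    let () =
      assert (check_peer ~host:"example.org" ~fingerprint:"ab:cd:ef" = Trust_on_first_use);
      assert (check_peer ~host:"example.org" ~fingerprint:"ab:cd:ef" = Match);
      assert (check_peer ~host:"example.org" ~fingerprint:"00:11:22" = Mismatch)

The weakness is visible right in the first branch: the very first connection is trusted blindly, which is exactly why the initial trust has to be bootstrapped outside the protocol.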

PGP is considered safe to encrypt and authenticate email, files, etc., and allows for decentralized trust mechanisms, but its configuration requires discipline by trained users, and it doesn’t currently cover communication channels (though it can cover individual files exchanged over these channels). If you want to further trust in communication, you should acquire the discipline to use PGP to secure your communications. You still need social solutions to the issue of trusting other people, and they need to have physical security to protect their PGP keys — but at least the technical side of the communication problem can be solved, with suitable discipline.

If you want to further security for all, you could make these things easier and more automated, with security checklists, etc.: provide less error-prone interfaces for managing identity, make it easier to track long-term identities and up-to-date identity denunciations, maybe integrate key management with Namecoin or other systems, etc.

2.5.4 Starting Points

There are plenty of interesting languages that you may look at or start from if you want to bootstrap a trusted software environment from scratch. Maru is a minimal language bootstrapping system, from Ian Piumarta, who worked at VPRI. Pharo, which forked from Squeak, is a small, self-contained, networked, graphical computing environment. ProjecturEd offers multiple views on the same code in a reactive functional editor. Shen is a minimal Lisp dialect with a programmable type system via meta-level logic programming. Racket is a comprehensive programmable multi-language development platform; it is quite big, but at its heart is the most modular programming language architecture so far. Gerbil tries to bring the essence of the Racket module system on top of a small Scheme, Gambit. PyPy makes practical use of the Futamura projections to create JIT compilers from interpreters, techniques that should be essential to bootstrap efficient systems from specifications small enough to be audited. Some of my other favorite programming environments are Slate and Factor, but there are plenty of other more recent initiatives. Or you may look at the bytecode interpreter for OCaml, or more generally a simple virtual machine that implements the programming language you use. The point being — there are plenty of potential starting points to build a trusted platform on which you can do everything, including controlling the evaluation of untrusted legacy software inside virtual machines.

3 Trusted Hardware

3.1 Going Deeper

3.1.1 Deep Pwning

Unhappily, even the strongest software bootstrap chain doesn’t completely remove the risk of being pwned: your hardware itself, or its firmware, may be covertly controlled by your enemies. It was demonstrated with IMPs at Usenix’08 that by adding a few thousand transistors to a million- or billion-transistor chip, you can insert a secret operation mode that can covertly control the computer. Also demonstrated were network cards and routers that specially recognize “fnord” packets that are totally invisible to regular computing systems: their checksum is bad according to official protocols, and they are dropped by normal hardware and software as if they were transmission errors, without their contents ever being inspected by, or even accessible to, the user who’d want to audit them. But they use an alternate checksum mechanism, and are specially recognized by compromised networking hardware as a covert channel used by the invisible spy coprocessor or spy processor mode to pwn your networks and your computers. It is of course dubious that even big government agencies will manage to subvert generation after generation of chips from big companies without getting noticed; but on the one hand, they don’t need to, because computers have grown so complex that they already have magic management modes and magic management processors, the firmware of which can be subverted; and on the other hand, computers are made of so many pieces that the enemy only needs to discreetly subvert the small makers of one of those pieces, if and when they want to target a particular line of devices.

There is already evidence of some mainstream smartphones being “pre-rooted” to the benefit of the Chinese mafia or Chinese government (again, sorry for repeating myself). There is also evidence that many network routers in key telecommunication infrastructure are backdoored by the NSA. While it is not generally believed that any government currently possesses this kind of control over a large fraction of deployed devices, there is plenty of evidence that government agencies do implant hardware add-ons or modifications to control specific people’s computers and routers. Some even suspect Intel and other chipset manufacturers that provide acceleration for cryptographic computations might leak keys in their acceleration support for pseudo-random number generation, or in other side-channels visible to the NSA. At the interface between hardware and software, spy agencies have been found to take over other people’s devices by modifying the machine’s firmware without actual hardware modification, which can be done simply by plugging in a suitably devious USB device (previously, a PC-Card device), thereafter taking advantage of “management” coprocessors and “management” processor modes to invisibly compromise a computer. Security researcher Dragos Ruiu famously claimed he had observed an elaborate multi-firmware-hopping rootkit, BadBIOS; though it is unclear to many whether his original claim was supported, similar monsters have since been seen and isolated, notably by Kaspersky Lab, and identified as being developed by the NSA. These are thus not imaginary threats.

Though it is possible that there is no backdoor whatsoever in any part of the software, firmware and hardware that comes with the machine you are using at this very moment, how can you ever be certain that this is indeed the case? In the long run, the only way to defend against these attacks, where the computer you use comes with a preinstalled backdoor that you can’t get rid of, is to insist on radically simpler hardware that is designed for verification. However, this supposes more fluid hardware specifications. This supposes breaking backward compatibility with older hardware. This in turn supposes no more closed proprietary software that requires binary compatibility at a low level. A whole lot of money goes against radically simpler hardware at this time. And the exponential growth of hardware with Moore’s Law for the past few decades also means that there was never a big drive towards simplification on cost-saving grounds: it is more affordable for now to let the waste accumulate, as long as it accumulates slower than the hardware grows. And so, there isn’t a big demand from the general industry at this time, but the time may come (unhappily, if it means a slowdown of progress); and in the meantime there have been, are and will be many small-scale initiatives to build cheaper and simpler computers, such as the XO-1, the Raspberry Pi, etc. On the up side, exponential progress may easily hide the malware, but only by diluting the malware and its importance in a deluge of goodware. By using and encouraging radical system simplification initiatives, you can promote safer hardware while still riding the wave of progress.

3.1.2 Hardware vs Software

From the point of view of computational semantics, hardware is just like a software layer below the usual software layers. Indeed, hardware is usually itself designed as programs in specialized programming languages such as VHDL, Esterel, etc. Thus, the same general techniques of attacks and defense, subversion and protection, generally apply. However, there are also many differences between hardware and software.

Hardware is expensive to produce and install. The attacker has to pay a much higher price of entry, which usually restricts such attacks to government entities. And then he has to constantly keep up with new hardware that keeps coming, which is hard and expensive, and a distraction from other government activities.

Hardware is harder to target, unless it’s a custom chip for a custom user. But then, to hack it in a way that is compatible enough to not be obvious, you need to have access to the custom user’s toolchain, at which point hacking the hardware might not be the cheapest way to attack them; and even when it is, it certainly isn’t the worst of the victim’s problems.

Hardware has a much lower rate of change. Good hardware will probably remain good, and bad hardware will probably remain bad. If the Enemy gets you even just once, he can keep you under control for a long time. On the other hand, if you identify the pwnage and find a way to work around it, you may defeat the Enemy for a long time, as the attacker will have sunk a lot of resources, and may have to wait a long time for a new opportunity to attack, if he can still afford it.

Hardware can be physically isolated. Whatever triggers the bad behavior can be filtered out. Physical surveillance can prevent tampering to replace good hardware with bad hardware. Physical disconnection, Faraday cages, audit of communications, etc., can cut any remote link with a remote controller.

Hardware also has less space in it for malicious features than software — and so hardware and firmware rootkits must be relatively simple, and thus somewhat simple to circumvent. For more complex pwnage, the hardware backdoor is but an entry point to a more complete software rootkit. This entry point may possibly be disabled.

None of this means that hardware security isn’t an issue — it already is, and will only become more so in the future. But it means that effective defense strategies are possible, once you take into account the costs of the attack and defense strategies.

3.2 Trusting the CPU

3.2.1 Blinding the lower layers

Now, let’s assume you have a reason to be paranoid and fear that your processor will be subverted, as you run applications that big governments don’t want you to run. How do you protect yourself against such subversion from the hardware?

In a move that may look paradoxical after arguing for simplicity, I’ll add this requirement for a secure bootstrap: randomize the code, at all levels if possible, but especially at the lowest levels of abstraction, that sit directly on top of the hardware. Indeed, if some of the least abstract layers of implementation have been compromised, randomization may prevent them from successfully leaking any useful information through a side-channel or from tampering usefully with your computations.

Tampering with your code requires decoding the pattern; this is easily doable if the enemy can compromise your code at the higher levels of abstraction and can abstract the randomness away; but it is quite hard if he only has access to lower-level mechanisms that are furthermore resource-starved and disconnected from any remote control. Therefore, to someone who sees the pattern, randomization adds a rather small amount of complexity, whereas to someone who cannot see it, things look very complex and totally random. That’s the same principle as cryptography in general, applied to the virtual machine that implements your secure environment.

By adding just enough randomness to the lower layers of your system, so it goes beyond the complexity horizon of the underlying hardware, you can prevent hardwired mechanisms from recognizing what you are doing and tampering with it.

Note that this requires every device to have its own randomness source, and all the code to go through some randomized layer — it does no good for you to implement a randomized virtual machine if you thereafter eschew the use of the virtual machine in the name of performance; at least, the non-randomized parts shouldn’t be trusted as much.
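To make this concrete, here is a minimal, purely illustrative sketch in Python of per-device opcode randomization for a toy virtual machine (the instruction set and all names are hypothetical): each device draws its own secret permutation of instruction encodings, so a lower layer that merely observes the instruction stream cannot pattern-match it, while the layer that holds the permutation decodes it trivially.

    import secrets

    OPCODES = ["PUSH", "ADD", "MUL", "JMP", "HALT"]    # toy instruction set

    def make_encoding():
        """Draw a fresh, secret permutation of opcode numbers for this device."""
        codes = list(range(len(OPCODES)))
        secrets.SystemRandom().shuffle(codes)          # uses the OS entropy source
        return dict(zip(OPCODES, codes))

    def assemble(program, encoding):
        """Translate symbolic instructions into this device's private encoding."""
        return [encoding[op] for op in program]

    def execute(blob, encoding):
        """Only a layer that knows the permutation can decode the stream."""
        decode = {code: op for op, code in encoding.items()}
        for word in blob:
            op = decode[word]
            if op == "HALT":                           # dispatch on op here
                break

    encoding = make_encoding()                         # different on every device
    blob = assemble(["PUSH", "ADD", "HALT"], encoding)
    execute(blob, encoding)

Real systems would randomize far more than opcode numbering (register assignment, memory layout, timing), but the principle is the same: the pattern is cheap to handle above the randomization layer and expensive to guess below it.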

3.2.2 Randomized FPGA CPU

The above randomization principle applies to all the lowest levels of your software, whether they run on top of a CPU, or, interestingly, of an FPGA.

The problem with a modern CPU is that it has a complex setup of firmware, hypervisor and/or operating system that you don’t always control, and that can interrupt your processing, freeze your system, analyze and tamper with it at full speed using the full power of the CPU. What’s more, if the computer is connected to the Internet, the rootkit can summon the power of the Internet to learn how to defeat the randomization. That may be quite unlikely, but if you are specifically targeted by bad guys, it is unhappily possible. Of course, if you’re specifically targeted, this will be one of the smallest problems you have.

Still, if some of your computations are a high-value target, you will want to run them on a CPU emulated on FPGA.

These days, FPGAs have many more gates than the simpler processors of a few decades ago had; this, combined with the relative slowness of FPGAs as compared to CPUs running directly on the best available hardware, means that an FPGA CPU will be significantly slower than an off-the-shelf solution (although the speed disadvantage may be slightly alleviated by the ability to include special instructions tailored to your application). However, if you have your reasons to distrust your hardware for computations where trust is really, really important (such as bootstrapping an entire trusted infrastructure, handling the root keys of a world-wide certification infrastructure, or executing some top-secret weapon-guiding algorithms), then, and especially as the world grows more dangerous, you’ll eventually afford the price of such a setup.

Indeed, the FPGA doesn’t have firmware that you don’t control, and cannot hide a mechanism that will analyze the FPGA in real time and tamper with it. This can help ensure that the Enemy doesn’t control that part of the system, and cannot use it to reach into higher parts of the system. The Enemy may always be hiding in a deeper part, or be intercepting the FPGA plans in your toolchain; but he will have been pushed back and his costs will have been vastly increased, for spying on your FPGA CPU is significantly more work. Moreover, the simpler and more efficient your FPGA design, the harder it will be to tamper with it without grossly breaking its measurable performance characteristics. Once again, Simplicity is a great ally of Trust.

3.2.3 More Firmware Checking

Of course, if the Enemy has direct access to your inputs and outputs before they are thus encrypted, that may be damage enough — but that will be harder to hide from a comparative hardware audit. If further you make frequent non-trivial changes, the Enemy has to keep up with you, which he hopefully cannot do by hacking the slowly changing hardware. Similarly, if the Enemy has complete control of your toolchain, He may try to use it to perpetuate his control when you build such a device; but there again, if you introduce changes and randomness faster than He can update a toolchain tailored against you, you can outrun Him. If your Enemy has such tight control that He can watch your computer at all times, and a team that will update His rootkits faster than you can modify your designs, then your system is completely pwned, you’ve already lost, and what you need to do is get physically out of their reach, leave behind your computers, run far away and start again. If on the other hand, your enemy doesn’t completely pwn you physically, then randomization and change will help.

Whichever solution you use, you’ll want to use randomization on each and every piece of firmware you can recompile (and if you really need to trust the system, you should be able to recompile it all); and then you should run checks against each processor and co-processor that verify patterns specific to your randomization, such that these checks will fail loudly if your firmware was tampered with.

Therefore, if you are quite fearful, use simpler devices that may be slower but in which it is harder to hide malware; and if you are paranoid, use your own custom randomized CPU on an FPGA, much slower, but much safer. In either case, just like for software, it will be much easier to verify the design of the hardware as well as to check that the hardware does indeed conform to its purported design when this design is not just reasonable, but radically simple. And from there, bootstrap the trust in the rest of your system, safely audit a new set of trusted devices, etc.

If you have multiple systems that are not likely to all be equally compromised, you can run audits of the toolchains by comparing their respective outputs; to remain unnoticed, the toolchain would have to know when to behave correctly and when to insert its malware, and that’s quite hard even for the NSA. More generally, by blinding the device and not letting it know when to be clever about hiding, you can make it harder for a compromised system to tamper with data and remain undetected.
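As a minimal illustration (assuming your builds are reproducible, which is itself a non-trivial requirement, and with hypothetical file paths), here is a Python sketch of such a cross-toolchain audit: rebuild the same firmware from the same sources on several independently administered machines and compare the digests; a toolchain that inserts malware only sometimes has to guess when it is being watched.

    import hashlib
    import sys

    def digest(path):
        """SHA-256 of a build artifact."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical images rebuilt from identical sources on three unrelated hosts.
    images = ["build-hostA/firmware.bin",
              "build-hostB/firmware.bin",
              "build-hostC/firmware.bin"]

    digests = {path: digest(path) for path in images}
    if len(set(digests.values())) > 1:
        print("MISMATCH: at least one toolchain is lying:", digests)
        sys.exit(1)
    print("All builds agree:", next(iter(digests.values())))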

3.3 Supply Chain

3.3.1 Randomized Spot Checks

Of course, it isn’t affordable for everyone to constantly audit every system against penetration and compromise of the hardware itself by nefarious government agencies. So how can we leverage what audit can be afforded? By randomizing the audits along the supply chain.

A deep compromise that goes undetected is expensive. If you use, at random, hardware from the same pool that is being audited by several independent authorities, chances are the hardware was safe when it was built. Of course, some substitution may have happened on the way to you or on the way to the auditors (or the auditors may have been hacked), and so you need to ensure and audit physical security along the supply chain, too, and tampering detection mechanisms, etc.

Once selected for audit at a random spot along the supply chain, the hardware has to be disassembled, checking that it matches the blueprints at every level, where the blueprints were themselves computed from trusted sources on trusted hardware. Electron microscopes, visual comparison of masks, comparison by trusted robots, etc., can all help. Independent units auditing the hardware can help, too.

Checks need not be only at provision time. At runtime, too, independent computers can be randomly made to run the same code, and compare their results, in mutually-watching pairs, or not.
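For instance, both the auditor and the buyer can recompute, from a public seed that no single party controls (see the next subsection on where such a seed can come from), whether a given unit should be pulled aside for teardown. A purely illustrative Python sketch, with made-up serial numbers and rates:

    import hashlib

    def selected_for_audit(serial, public_seed, rate=0.01):
        """Deterministically audit roughly `rate` of all units; anyone can re-check."""
        h = hashlib.sha256((public_seed + ":" + serial).encode()).digest()
        score = int.from_bytes(h[:8], "big") / 2**64    # fraction in [0, 1)
        return score < rate

    # The manufacturer, the auditor and the buyer all compute the same answer.
    print(selected_for_audit("SN-000123", public_seed="example-seed"))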

All these things are not yet happening. But then, neither are deep hardware attacks suspected at this point. The happy times we live in! When such attacks happen or become likely, you can bet that every big company and every big government will want to have their own system of random spot checks along the supply chain, to defend themselves against other big companies and big governments.

3.3.2 Random Quality

For random checks to actually work, the randomness used to determine which computer elements are audited, and the randomness that determines which computer elements you’ll use, must both be independent of the probability that a given element is compromised.

Of course, these audits will reduce the attack surface, at least as measured according to the random distribution used. But if your random distribution gives a very small weight to something that actually is plenty wide enough for attackers to breach into the system, then that is exactly where they will breach. In other words, random checks won’t defend you against what your distribution neglects to account for.

Another way your random checks can be fooled is by someone who would control the randomness, such that the working system that the auditor inspects isn’t the same as the compromised system that the user uses. Therefore, if the randomness itself is to be controlled, it can’t be left to anyone that wouldn’t be universally trusted; that is, to anyone at all. Happily, there is finally a solution for that, and that is blockchain technology.

Blockchain technology essentially creates random numbers that no one can afford to fake. This is exactly what is required for the random seed of auditable pseudo-random number sequences used in the audit process. Be careful, though: to further reduce the affordability of an attack, the specific method of randomness extraction would ideally combine many blocks from many chains in a way that minimizes any single block’s contribution, together with some secret salt specific to each user of randomness.
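A minimal sketch of such an extraction in Python, under the assumption that you can fetch recent block hashes from several independent chains (the hash values below are made up): many block hashes are folded together, then keyed with a user-specific salt, so that no single miner can cheaply steer the derived audit seed and no outside observer can predict it.

    import hashlib
    import hmac

    def audit_seed(block_hashes, user_salt):
        """Derive a per-user audit seed from many block hashes plus a secret salt."""
        acc = hashlib.sha256()
        for h in sorted(block_hashes):           # order-independent accumulation
            acc.update(bytes.fromhex(h))
        # Keyed hash: the public blocks supply the entropy, the salt keeps the
        # derived sequence unpredictable to anyone who doesn't hold it.
        return hmac.new(user_salt, acc.digest(), hashlib.sha256).hexdigest()

    # Hypothetical block hashes taken from two different chains.
    blocks = ["00" * 32, "11" * 32, "22" * 32]
    print(audit_seed(blocks, user_salt=b"my-device-secret"))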

4 Trusted Wetware

4.1 Wetware

4.1.1 Softer and Harder

Although software and hardware may be the intermediates in issues of computing security, the ultimate actors and ultimate victims are always made of wetware. Wetware?

Software runs on Hardware. Hardware runs on Physics. Wetware runs on Biology. It’s Brains. Also called Meatware.

The actors of the system (at least until autonomous AI happens, if ever) are all humans. The creators, the victims, the criminal masterminds, the detectives. In the end, the machines serve humans, relate humans to each other, help humans build different human networks that they couldn’t otherwise build. But it is the humans who constitute the essential parts of the trust network. Discussions about computing security that forget the humans at the end points of the network are missing the big picture.

Why did I go from discussing Soft-ware to Hard-ware to Wet-ware? Isn’t that a reversal back in the direction of softness? Not at all! From the point of view of computing security, the mathematical rigidity of “software” makes it much “harder” than hardware ever can be. The physical reality of hardware, the ambiguities and approximations inherent in the behavior of any real-life device, the fact that it can be attacked in ways not covered by formalism, etc., make “hardware” much “softer” than “software” from a conceptual point of view. Wetware is only more so: softer than even hardware.

I started with Software then Hardware issues, because when these aren’t solved, computing systems don’t bring more than they take when trying to empower individuals against The Man. Now that we have discussed ways that they can contribute positively indeed, let’s discuss the issues that really matter...

4.1.2 The Weakest and Strongest Link

Wetware is the softest part of security, also the most valuable part, the most expensive part, the most difficult to change, and the greatest to have on your side.

And wetware security follows very different rules than hardware or software security. Incentives matter: individual costs and benefits to action and inaction. Failure is a given: all humans fail; but you can make them succeed more often and fail less often, and you can increase the impact of success and decrease the impact of failure.

Human abilities are very different from machine abilities: most humans can’t follow rules exactly, consistently, very fast; most humans can’t understand the big picture and make rational decisions about it; only a tiny number of humans can both follow rules consistently yet override the rules intelligently when they don’t apply. Humans couldn’t replace a machine to save their lives, and systems that rely on humans doing a machine’s job will fail, badly. On the other hand, humans can adapt, assess costs and benefits, set priorities, adapt to new situations, think out of the box and imagine creative solutions, in a way that machines can’t — and won’t until it’s too late for us humans.

Successful systems can build on the strengths and weaknesses of the available dryware and wetware. However, building secure human-machine systems is a topic that is vastly under-developed. Which means, there are a lot of opportunities for innovative solutions — that you could help develop.

4.1.3 Example Challenge: Voting

Let’s consider the “popular” problem of anonymous ballots. Even if you and I believe that a monopoly state is a bad idea, “democratic” or not, or that public vote is more honest, safer and more amenable to fostering peaceful solutions, the fact is that many people believe anonymous ballots would be a great idea for whatever decisions they want to make between themselves. But are anonymous ballots feasible at all, without being taken over by The Man? It seems that the answer is “not at all”. In every country with strong political divisions, the legitimacy of every ballot is contested; I’m not even talking about campaign funding rules, major mass media bias, or constant cradle-to-grave propaganda by the Establishment, Bureaucracy and Mainstream Media. Even at the technical level, whether the left wing or the right wing wins, you’ll find people denouncing voter fraud, with plenty of evidence of double voters, phantom voters, stuffed ballots, dishonest counting, skewed voting machines, etc., at local or national elections, primaries or final elections, etc. Even if most or all of this evidence is bunk, the fact remains: there is no way for ordinary people to know whether the allegations are true or false, to tell real elections from fake ones, etc. The elections may be honest today and rigged tomorrow, and the public will be none the wiser. And so why lend any credence whatsoever to a process that is impossible to audit, and whose winners make sure it will never become easier to audit? As Stalin reportedly remarked: “The people who cast the votes decide nothing. The people who count the votes decide everything.”

Yet there have been technical solutions to the problem of anonymous voting for decades: end-to-end auditable voting systems elegantly allow anonymous ballots in a way that lets everyone check that there was no fraud. Everyone can tally the ballots and can check that each voter voted only once. Everyone can check that his own ballot was indeed taken into account in the final count. Everyone can randomly interview other registered voters to check whether they did indeed vote, that no one voted in their stead, and that if they did vote, their ballot indeed matches the public record. And importantly, everyone, or at least a statistically significant number of people, must indeed do these checks. These solutions are of course technically complicated; they require training, and they require technically savvy people to assist the less savvy, and to ensure through enough random checks that fraud was unlikely. This is all quite costly, both materially and mentally. And yet, in the long run, these auditable voting systems probably cost less than the current system of having Establishment-appointed armed men watch the ballot boxes that decide whether they stay in power, or at least that power goes to their partners in the long game that is the voting con. But it’s precisely because auditable voting systems are cheaper and safer in the long run that the Establishment will do its darned best to prevent them from ever happening. The last thing They want is to lose control.
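To give a taste of the mechanism (and only a taste), here is a drastically simplified Python sketch that models nothing but the public re-tally and the check that one’s own ballot really is in the count; real end-to-end systems add cryptography (commitments, mix-nets, etc.) to keep receipts unlinkable to voters, none of which is modeled here.

    import secrets
    from collections import Counter

    bulletin_board = []                 # public: list of (receipt, choice) pairs

    def cast(choice):
        """Record a ballot and hand the voter a receipt to keep privately."""
        receipt = secrets.token_hex(8)
        bulletin_board.append((receipt, choice))
        return receipt

    def tally():
        """Anyone can recompute the totals from the public board."""
        return Counter(choice for _, choice in bulletin_board)

    def included(receipt, choice):
        """A voter checks that their ballot appears as cast."""
        return (receipt, choice) in bulletin_board

    r = cast("candidate-A")
    print(tally())
    print(included(r, "candidate-A"))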

So there’s a problem that’s solvable both in theory and in practice, but the solution won’t be used, because it goes against the interests of those key people who can decide whether to use it or not.

4.1.4 Foundations for Better Wetware

Beyond this example, though, the bigger point here is that in the end, it’s always human issues that need to be addressed. Having secure foundations for your software and hardware is a necessary step for computers to contribute positively to wetware issues: you cannot build anything durable on shaky foundations. In other words, computing systems can’t help you against your Enemies if your worst, organized, enemies are the ones who will control your computing systems when they matter.

Software and Hardware are tools to enable better Wetware. Whose tools they are determines whose wetware it will be. If you don’t control your computingware, odds are you’re being controlled in other, wider ways, too. Being in control (however indirect) of your computingware is part of being in control of your own life.

4.2 Decentralized Networks

4.2.1 Decentralized Everything

This struggle to claim, reclaim and defend your computers is thus a battle of the individual against The Man. The Enemy is not just organized criminals seeking a monopoly or defending an existing monopoly (i.e. Governments). The Enemy is Monopoly of control itself — the tendency towards forceful centralization. Defeating one specific enemy without changing the structure of human relations only creates a void to be quickly filled by a new enemy. The Solution is whatever makes decentralization more stable, and makes centralized attacks less stable. Decentralized computer networks. Decentralized human networks. Decentralized everything!

For individual users to reclaim their computing systems, it means that there are strong decentralized networks of users, that communicate with decentralized protocols, can resist siege by the Big Bad Bandits, route around Enemy presence, outpace the Enemy, scatter faster than He can attack when He is strong, or gather stronger forces to attack Him where He is weak, such that in the end, the Enemy cannot control much of anything.

Can properly designed computing systems be good enough at helping individuals become autonomous that they can beat more centralized systems despite the economies of scale that these centralized systems have? What’s more, can these systems be good enough that they will lure users away from centralized systems, despite most users neglecting the costs of mistrusting these centralized systems when they adopt them?

4.2.2 Blockchains

Recently, a great new tool has emerged among decentralized technologies: Bitcoin.

Bitcoin, the famous decentralized digital currency, is already disrupting a lot of the current monopolies on finance. It has been remarked that it allows for the emergence of a decentralized consensus on more than just money: the bitcoin blockchain can record anything, including data about pretty much anything, including other blockchains. It is therefore more than just a currency, a ledger. It is a notary. It is an arbiter. And it is much more. Yet it wouldn’t be anything if it were not first a currency: for it crucially relies on no one being able to afford to subvert its consensus.

We already discussed above an application of Bitcoin made possible by the fact that no one can afford to subvert this consensus: Bitcoin can provide a consensual source for pseudo-random number seeds, that can be used for auditably random audits. Bitcoin probably has many more applications that have yet to be discovered.

But Bitcoin’s most important application is of course to take money-making back from a centralized banking system that can racketeer you and unilaterally watch you to a decentralized network of miners where there is no monopoly and all the information is open, yet there isn’t more information than needed for the transactions as such.

4.2.3 Bitcoin Management

So let’s consider another problem, this time where the key actors who can choose the solution are the same as those who’d benefit from it: Bitcoin management.

Right now, normal people will have a hard time managing their bitcoins. Wouldn’t it be reasonable to expect to be able to easily store most of one’s bitcoins offline in cold storage, while keeping a few available on a trusted handy device, and being able to receive funds or tally one’s accounts from any moderately trusted device? Yet anyone who tries will face a lot of difficulties. You will have to build your own Bitcoin solution out of haphazard pieces of software and hardware that were never made to interoperate with each other (at least, no sensible person would dare reveal their complete actual combination, since anyone who does not only becomes a target, but also becomes the template for an entire class of targets). Moreover, these pieces of software and hardware will each have pretty bad failure modes if you don’t follow some discipline way beyond what these devices help you with. You will have to track down yourself which secret key is on which device or which piece of paper; which device and which piece of paper is out of reach of robbers; and, if you successfully manage two-factor authentication, which bitcoins to recycle should one factor have been compromised. You will have to make sure you remember each and every password, and have a way to transmit all passwords and all sensitive data to your heirs when you die, without making it too easy to compromise the whole setup while you’re alive. All that is quite hard. Certainly, a computing system, assuming you can trust it, could help you in this endeavor, but ultimately, any solution requires humans to learn and consistently apply new tricks. Yet most people can only be trained so much. Even those who are bright enough and trained can make mistakes. Any training or discipline required decreases the value of the entire setup. Regular repetition is an expensive burden when present, but its absence induces forgetfulness that may be critical at the worst moment.

The purely technical parts, computers can easily help with: encrypting keys with a password, printing encrypted keys, scanning them back, etc. Then again, to be usable, these systems had better be geared not just for the raw technical aspects, but also for the human interactions that build trust in both the computing systems and the user’s ability to manage them. For instance, a cold storage system should divide those bitcoins into several addresses, and invite you to pick one or a few addresses at random (importantly using randomness from outside the system, e.g. throwing dice), redeem them and retry the cold storage procedure, just so you know you can trust the system. No less important, and somewhat harder even though algorithmically trivial (because it needs to work in sync with whatever scheduling tools you are otherwise using, possibly including many other devices, some of them permanently disconnected from the Internet), the computer could maintain standardized checklists to make sure you follow the discipline required to manage your bitcoins. Of course, if this discipline is too automated or too predictable, it can be systematically exploited; and if it isn’t automated and predictable enough, then it might be hard to make it work for you, and it probably won’t easily scale to a lot of people. Ideally, each user’s discipline can be predicted by that user but not by other people. So, can an assistant remind you to think about your passphrase often enough that you don’t forget it, but not so often that it becomes either an obsession that makes it more likely you’ll leak it, or an annoyance that makes it likely you won’t follow through? Can the assistant keep you on a useful schedule to check the integrity of your bitcoins, and to regularly cycle all those potentially at risk into new ones? Can it help you split your keys and databanks between several trusted friends and/or trusted lawyers, so that should you die or otherwise become incapacitated, the executor of your will can retrieve the bitcoins and transmit them to your spouse and kids (or favorite charity)?
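As one small piece of that puzzle, splitting a key between trusted parties is algorithmically simple. The following toy Python sketch is an n-of-n split, where every share is needed to rebuild the key; a real setup would rather use a threshold scheme (Shamir’s secret sharing) or Bitcoin multisig, so that losing a single share is not fatal.

    import secrets

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def split(secret, n):
        """Produce n shares whose XOR is the secret; each share alone reveals nothing."""
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        last = secret
        for s in shares:
            last = xor(last, s)
        return shares + [last]

    def combine(shares):
        out = bytes(len(shares[0]))
        for s in shares:
            out = xor(out, s)
        return out

    wallet_seed = secrets.token_bytes(32)       # e.g. the seed behind your keys
    shares = split(wallet_seed, 3)              # one share per trusted friend
    assert combine(shares) == wallet_seed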

4.2.4 Crypto-feudalism

Beyond the case of Bitcoin, we see that security in any domain requires specialized skills and discipline. For everyone to pay the full price for everything is not only costly, but wasteful and limiting. How then can individuals enjoy economies of scale without falling into the perils of monopoly centralization? Not everyone can be a specialist in everything related to security, or in security at all, whether it involves computers or not. But regarding security just as any other field of practice, everyone can delegate to one or several experts in the field. Entrusting your security to other people, along decentralized computing networks… that’s crypto-feudalism.

Crypto-feudalism is a heterarchy, not a hierarchy. Unlike late-stage feudalism, it recognizes no top-down monopoly, with a king who lords it over dukes and earls and barons who rule over commoners, no hereditary numerus clausus where titles are handed down amongst members of the Establishment. It’s not the monarchic end-game of old, dying European feudalism, with which it was confusingly identified, its lords squeezed into subservience by the monopoly above them. Crypto-feudalism is more like the decentralized rule of Common Law amongst the free Germanic peoples before they became the master race that ruled for a thousand years and beyond over the slaves of the former Western Roman Empire; but unlike the laws of these Germanic peoples, crypto-feudalism rejects both authority over the slaves below and authority from rulers above: it abides by Ayn Rand’s motto, “I swear, by my life and my love of it, that I will never live for the sake of another man, nor ask another man to live for mine.”

What remain are only radically decentralized networks of interpersonal security protection agreements. Crypto-feudalism is purely bottom-up and voluntary, with individuals freely joining into associations and confederacies, and freely entrusting, or ceasing to entrust, their security to whichever freely competing experts are currently most apt to defend them; these experts themselves delegate to other experts in matters where they are not the most efficient, etc. Crypto-feudalism is like panarchy, where everyone chooses his own rulers, with a complete right to exit one’s contract and to enter the market as a new competitor.

Thus, you rent the services of someone who will ensure your computing security; your protection experts will keep your devices secure and well configured; they will make sure untrusted software doesn’t escape its sandboxes — when it exists at all; and they will physically isolate devices holding sensitive data from devices running contentious code. Importantly, your crypto-protectors will also check that you properly partake (or that they take your part) in the many levels of distributed audit protocols that keep the system secure; because even with experts by your side, security can never be purely automated without any training or discipline involved.

4.3 Statistics

4.3.1 Scale

Now, there are some advantages inherent to centralization — as well as disadvantages. If these advantages (minus disadvantages) are so great as to give would-be centralized authorities power over other people’s computers, then a centralized power grab becomes inevitable. How then can these advantages be balanced, countered or bounded, such that decentralization remains stable?

Big players who possess a lot of data and a lot of computing power can do what other people can’t. The big players can use statistics to see patterns emerge. Small players can only run statistics on public data… and by definition they have access to less data and possess less computing power; they can, to a point, share data and computation with other people; but lacking centralization, they are limited in the trust they can lend to other participants’ data and computations — for the Enemy can and will hide among the participants. What is omitted from the public data? How deep can you go with it, and how much deeper can the big players go?

On some things, Big companies and Big governments will always have an advantage. On other things, they will always have a disadvantage: they can’t be as nimble, as adaptive, as creative. Whatever evils they indulge in also mean they can’t trust their own people, who lack morals; they have to spend considerable effort on propaganda and indoctrination, and must forgo the talent of more honest, less malleable, less narrow-minded people. The internal power struggle inherent in a structure where power is key also limits the efficiency of the machine: players will conspire against each other, waste energy in political in-fighting, avoid recruiting or elevating people talented enough to become potential rivals, or even workers with enough gumption to dispute orders or contradict their chiefs, etc. The machine will work only when the interests of its internal actors (starting from the top down) align with the interests of the machine. Therefore, the machine will be great at acquiring and maintaining its stranglehold on power, but bad at doing anything useful with it, including whatever additional lofty or lowly goals anyone, powerful or not, would like this power structure to help them with.

The question is then: by what innovations can decentralizing forces be kept strong enough to counter the pressure towards centralization, so the big players can’t use their advantage to control everyone?

There are many ways that individuals can evade control by large institutions, even though they can never beat them at their own game of massive force. Displayed conformity and discretion in their actions allow them to cultivate dissidence in private. Privacy is thus important, and the technical ways to achieve it often precede (but do not replace) the accepted standards in forcibly asserting it or suppressing it. Randomness in behavior makes it harder for the controllers to distinguish noise from information, and can also create more space for individual liberty. Wanton randomness can help you flout control. There are many ways to make the watchmen (who shall watch them?) blind to the patterns that matter, even when they can see all the details they care for. Nimbleness allows individuals to react faster than large bureaucracies can, so they can change behavior, run ahead of spies and enforcers, detect threats and adapt faster than the Enemy can. This may come at the cost of not investing too much in any stationary structure easily detected and targeted by the Enemy. By understanding the power structure, you can avoid confronting it where it is strong, you can stay out of the focus of its attention, and at times you might even play its elements against each other. These are all technologies that you can work on.

4.3.2 Bias

Bitcoin and other protocols can be geared to make it hard even for the big guy to bias decisions. Of course, the problem with making things too hard to bias is that if and when the Big Enemy finally manages to take over those things, it can become very hard for the public to recover from the bias It controls.

Consider elections, on whatever topic elections might be legitimate, if any. In a centralized network, there can be large scale censorship of votes and/or stuffing the ballot boxes, in addition to copious propaganda. The citizens will be none the wiser when 100% of elected representatives are members of the Establishment (though sometimes they will be junior members rather than senior members). With a decentralized protocol, most people can’t be trusted to do the checks by themselves… but they can delegate! (see again crypto-feudalism above). Better social control of election mechanisms could thus help make it harder for an Establishment to control the ballot. (Of course, this doesn’t address the inherent flaws of any Democracy, even when it technically runs well.)

4.3.3 Strength vs Nimbleness

Rulers want you to submit and will prevent you from coordinating. Indeed, they stay on top thanks to their monopoly on coordination; if you could somehow coordinate everyone against them, even when they have so much money and experience being the coordination monopolists, you’d soon become Them. And then you’d become the one impossible to beat. So coordination will always be an advantage they have against commoners; but it is an advantage they do not have against each other! Indeed, all they know is force, but amongst each other, who by definition are each more powerful than commoners, force is less effective at keeping each other in check. And so, though more coordinated than the public, they will actually be less cooperative than the public, always playing negative-sum games with each other, and unable to quell divergent interests when, by definition of their power, they can’t be made to obey as easily as commoners. Truth is, amongst each other, they are in a state of anarchy. You can thus learn to play them against each other — or, when in need, find someone who knows how.

However, despite all your efforts, when in a confrontation, they will tend to win most of the time — that’s the very definition of their having power. We may like to live in a place like Lake Wobegon, where “all the women are strong, all the men are good looking, and all the children are above average”. But no, most people will not be able to fare better than average against The System. However, the hope is that if the average man can be made to evade The Man better, then eventually, The Man won’t be able to collect enough taxes to justify his efforts, and will starve (or reluctantly get a real job).

The Bad guys are bigger than anyone. Frontal assault against them is suicide. But, they are much smaller than everyone together, so they can’t do all out frontal assault, either. And so, they may have strength for them, but in a game of sneak attacks and nimbleness, you can probably learn to run faster than they do. They can run after anyone, but they can’t run after everyone; they can only afford to run after a few; and you can make sure it won’t be you by not being a direct threat to them or a designated target.

4.4 Sous-Veillance

Mass surveillance, accompanied by discretionary enforcement of arbitrary laws that make everyone a criminal, gives the mighty total power to do anything to anyone at any time. Meanwhile, secrecy about their own activities puts the mighty above retaliation, and makes the public incapable of reacting and coordinating against the injustice.

If they can see you all the time but you can only see them when they want, they win, and you can never run away. The end game is 1984 and the descent into totalitarian bankruptcy. Even when you haven’t been killed, arrested or harassed by bureaucrats or their protected criminals, you live in constant fear, never knowing whether they are on your tail or not, and what, if anything, you can do to avoid their wrath. Your fear becomes your own prison, while your daring becomes your downfall — either way you lose.

If on the other hand you can always see them even when they don’t want, and based on that information you can run ahead of them, or better hold them accountable… then they lose. And that’s Sous-Veillance.

Can they watch everyone in a panopticon? Well, they are human too. The public can watch them back. Leak the data collected about them. If you can’t often reach the big baddies, you can reach the small and mid-level baddies, and make it harder for the big baddies to find faithful henchmen. If they end up better watched than they watch the public, then in the end game there can be Vehmic Courts and their power is over.

They launch drones; the public can launch counter-drones. They install cameras; the public watches them with the same cameras, plus cellphones. It is unlikely that the trend towards more sensors everywhere will go down rather than up. We are facing an era of pervasive surveillance. But it can also be an era of pervasive sous-veillance.

4.4.1 Logical, Physical, Political Security

One thing is to make sure the software, hardware and wetware you know you have is designed for security. Another thing is to know that there isn’t spy software, hardware or wetware you don’t know you have.

It doesn’t matter at all that your software is written with safety in mind, if you let your enemies access your computer and install malware to control it — or if you install it yourself on their behalf because you are confused about some option given to you. Logical security comes before other software security. It doesn’t matter at all that your computer has no spy software running, if you are watched by nearby electromagnetic wave receptors, or by good old-fashioned hidden cameras and microphones — or if you let your enemy access your computer and plug a spy device into it. Physical security comes before other hardware security. It doesn’t matter at all that your computing systems are completely secure, if you entrust your enemy with secrets, let them decide for you or act for you — or if you let your enemy teach you how to think. Political security comes before other wetware security. Logical security, physical security, political security — get these right before you bother with the elaborate stuff. Even without any degree in any kind of engineering, you can improve computing security and life security by helping yourself and others control who has access to their bodies, their minds and the electronic extensions thereof.

Safe hardware, software and wetware toolchains are a great tool against mass surveillance and mass control, because they mean that the bad guys have to specifically target you to spy on you. But against specific targeting, they are only the start of a discipline that could bring you security. Worse, as the price of hardware keeps dropping, at some point mass surveillance through swarms of mosquito-sized robots will be a fact of life. Ubiquitous cell phones, the upcoming revolution of wearable computing, etc., all promise a future where everyone is spied upon all the time by hundreds of devices that you will be carrying without even being aware of them — or that other people around you will carry.

How then can nano-drones be contained? Have your own swarm of counter-drones to wage war against the first drones? The Feds will probably try to prosecute those who wage such a war — on the other hand, as prices drop, not just the Feds, but all kinds of evildoers will have swarms of nano-drones to fight, and people from other countries will encourage defense against any one country’s drones; therefore a blanket prohibition against all anti-drone defenses isn’t tenable, even though it will be attempted at times. The future is hard to predict, but the legal battle may be an important component of it.

In any case, if you let your enemy control your data, your device, your mind... your life is not just compromised, it’s fully pwned. Suitable external access control is as essential a component of systemic security as is good internal organization discipline.

4.4.2 Anticipation

Good science fiction makes a point about the actual world. Its counter-factuals serve to amplify the potential far-reaching consequences of some factual concepts, on which the focus is set.

On this topic, we may find a lot of inspiration from authors of both science fiction and non-fictional anticipation. The classic about surveillance is of course George Orwell’s 1984, although we cannot dismiss the “softer” but no less totalitarian approach taken by the no less classic Aldous Huxley in Brave New World. Vernor Vinge, in A Deepness in the Sky, describes (among many other things) further extremes of surveillance as well as of subverting the lower layers of a distributed computational system. David Brin argues in The Transparent Society that you can’t stop surveillance, but you can decentralize its power. Cory Doctorow tells the maybe naive fable of sous-veillance gone right in Little Brother. David D. Friedman, in Future Imperfect, discusses how technology can, among many things, change the future of privacy, one way or the other.

5 Conclusion

5.1 Security as Defense

5.1.1 Costs and Benefits

Ultimately, secure foundations to human organizations are but a means that do not matter in themselves; it is what you build on top of these foundations that makes the whole endeavor worthwhile — or not. The benefit is in all the things that humans can do when they are free to create.

Of course, the cost of getting these foundations right should be accounted negatively, on the cost side of the equation that decides whether these computing systems are worth the benefits they bring (if any). Some projects are not particularly security sensitive and do not require strong foundations (besides avoiding obvious mistakes such as building on quicksand). But where robustness matters, it is very important to get the foundations right the first time over, for it will cost you much more work to redo the foundations of an existing building — or to fail to. And when solid foundations matter, you should pick the right tools to dig those foundations: the heavy machines and explosives that will enable you to dig deep down to the bedrock (i.e. formal methods), rather than the spoons and shovels that require intensive labor and can’t dig through any kind of rock (i.e. diddling around with scripting languages).

We could expand this metaphor of foundations as necessary to build strong walls that won’t be easily undermined, to defend a territory that may or may not be worth the cost of defending against the attackers attracted by its wealth. First, building solid foundations is useless if you don’t actually build strong walls on top: a very secure underlying system on top of which totally insecure user code is run is still insecure; don’t blow your total defense budget on just one overprotected piece when it is the weakest spot that matters. Next, let me note that however strong the walls, it ultimately takes humans to defend them, and when they are attacked the goal is to survive until reinforcements can break the siege. If you can’t man your castle enough to defend it (or to even notice that it is under attack!), or if your mutual defense network can’t muster the reinforcements, your castle can never protect you, no matter how strong its walls or their foundations. The same goes whether the system you’re defending includes automated computing or not.

5.1.2 Hunters vs Farmers

I’d like to stress this point: without the ability to fight back and attack, defense systems can but delay the inevitable. Herbivores can hide in herds and occasionally repel an isolated lion. But ultimately they can only delay the time when a predator will eat them: when they make a mistake, when they have bad luck, when they are weak, when they are old — eventually. Meanwhile, the defenders have little practice in actual confrontation, don’t usually see the Enemy until it’s too late, and have little information to calibrate their defenses, whereas the Enemy chooses the battles, learns at every fight, hones his skill and knows his victims. Pure defense is a very unequal, losing game; it might be good enough for a long and free life when the Enemy is merely a pack of hunter-gatherers, of brutish animals with superior weapons. However, the Enemy we face is not a mere pack of hunters; it’s a tribe of farmers.

A hunter takes away a few animals at the margins of the herd, once in a while, while the animals live by and large free. A farmer permanently captures and enslaves the entire herd, and turns all animals into a caged culture of meat, milk and other raw materials, plus occasional sex toy. A hunter targets the weakest so as to feed his immediate hunger, with the evolutionary effect of keeping the remaining herd strong and healthy. A farmer breeds livestock for his own purposes, with the evolutionary effect of transforming the herd into helpless masses of foodstuff incapable of sustaining themselves in the wild. A farmer will specifically cull the strong and the free and extinguish any line that would be an inconvenience (not to speak a threat) to his power, so as to ensure the captured herd shall remain entirely weak and submissive. And, if He can get away with it, who’s to blame Him? Vae Victis. It’s the loser’s fault if he’s too worthless to successfully stand up for his own Freedom. Most of us have no compunction eating meat. The Enemy will have none either when devouring all that makes up your life. It’s a completely different kind of Enemy, who has a different kind of attack model and requires a much more advanced defense strategy — one that to succeed must include fighting back at some point.

5.1.3 The Ultimate Stake

Let me stress what is ultimately at stake: Liberty itself. In this race between human farmers and free humans, there are only two possible end games: in the end, either we extirpate farming, by exterminating any would-be farmer, or they turn humans into cattle, and extirpate any free will by exterminating free men. It is a war of extermination between on the one hand the psychopathic elite bent over the domination of sentientkind, their utterly evil “sheepdogs”, and their utterly stupid herds of sheep, and on the other hand, those who want to live exchanging value for value between free humans. For now, the prairie is large and abundant with grass and water everywhere; thanks to exponential technological progress, it is enough to run faster and further than the Enemy, to hide from Him, and let Him eat the weak while the healthy run ahead and hope to find a paradise on the other side. But eventually, the prairie will end, there will be no space left to run away from the Enemy, and even if there’s a bountiful garden on the other side, it too is finite, and it too will be invaded by the Enemy. Some place or some other, there will be no way left to run to, and it will be necessary to make a stand, or surrender. Maybe that garden will be a better place to make a stand. Maybe not. But if and when there is no more place to run to — the option of running away will soon enough be closed.

On the other hand, the elite is not a separate race of “lizard people”, as in many conspiracy theories or spoof thereof: it is made of humans, albeit selected for their inheritable sociopathic traits. Which of course only makes them all the more criminal than if they were animals or demons devoid of human moral agency. These rulers interbreed with their cattle, coopt the most sociopathic individuals from cattle families or demote their least sociopathic members down to cattle status. Because their physical needs and desires are mostly shared with those of the cattle, it isn’t economical for either them or their victims to build completely different and separate production infrastructures; whatever progress is made on medicine or on computers or on faster and safer transportation will be largely shared. The elite can of course afford the very best of what the market will offer; but it goes against its interests to hinder production improvements that benefit everyone, and so it will tend not to — at least as long as there remains competition within its ranks, so those who take good care of their cattle prosper more than those who don’t. A large economically driven elite will breed free range cattle for its superior quality and productivity, and the ease of sharing progress with it. But then again, between a short-term horizon, stupidity and sheer wickedness, there are many reasons that rulers would go against their material interests. And when the power to decide is so concentrated that the few rulers already have all their material needs sated with stolen goods, then material interest is just not in the rulers’ interest; instead they are driven purely by political, sexual, ideological and tribal domination upon other human beings; and that’s when the situation of the cattle is at its worst.

5.1.4 Know Thy Enemy, Know Thyself

When devising a war strategy, it is important to acknowledge what the stakes are; and then to understand who the enemy is, what are his goals and purposes, the incentive structure of his organization, his strengths and weaknesses, his vision and blind spots, his skillset and incompetence set, etc. As Sun Tzu famously wrote: “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.” The smaller enemies are economically driven hunters. The bigger enemies are politically driven farmers. They are very different enemies that ultimately require different security strategies.

And of course, your strategy will similarly depend on knowing yourself, and on recognizing your allies and partners; what are their goals, etc.? What kind of technical structures can you build with these other people that will let them resist attacks of your common enemies? Can you sustain a siege long enough for reinforcements to come (and can you find those reinforcements)? Can you hide long enough to prosper and move on before you’re found? Can you see the enemy from afar and run away ahead of him? Can you strike the enemy then disappear out of his reach? Can you organize in cells able to collaborate on attacks against the enemy, yet not knowing enough about each other so taking out one can take away the entire network? Can you teach a significant subset of a population to defend, so that the enemy has more costs than benefits in attacking anyone? Can you turn the enemy’s men to your side, or against each other, and cause mayhem in his ranks? What sphere of freedom can you defend at any moment with each strategy?

In between the two, there are plenty of permanent or temporary partners and allies of either party in the struggle, and they each will act according to their own costs and benefits. You can trust the masses to follow whoever is powerful, and turn away from the weak. You can trust them to never rise violently, except to put into power even worse violent kind.

Facing the Orwellian threat of successful world-wide human farming backed by advanced indoctrination and surveillance technology, the only solution might be to run forward towards the Singularity. But this solution is only valid as long as you can keep running faster than the farmers... and as long as there’s room left ahead. Eventually, possibly as part of this Singularity event, it will be time to turn back, face the Enemy, and slay It — if, running far enough ahead, you find technology that makes it possible. The predators stay in place because they can nip in the bud any rival organization before it becomes serious opposition — or else coopt its leaders into the predator class, which is all most of them want. They will know to kill you when they can; if you stay your hand, it will be a fatal mistake; they will not make the same mistake.

Beware: in general it’s neither necessary not sufficient to kill the current monster master; what is necessary and sufficient for freedom to win is killing monster masterdom itself. If the technology you find is merely violence aimed at the weaknesses of the current monster master, all you will have done is replace it with a better monster master, after unleashing violence. (although if that technology exists and you found it, it is still your responsibility to put it in the best hands rather than the worst, even though it won’t solve this bigger problem). As for expecting a large loosely coordinated population to violently raise against its monster master and succeed, not only would such an event be an chaos of blood, it is unlikely to happen but through the demographic explosion of a genocidal religion that itself is stable despite loose coordination, being based on stably and simply channelled primal urges to violence and domination, e.g. Islam. While it might be a problem for the predators in place (or rather their successors, for whom they frankly don’t give a damn), it is no relief for freedom lovers.

5.1.5 Your Part in The Game

Whether you are good at Software, Hardware or Wetware, there are plenty of technologies you could be working on that could contribute to making tomorrow a safer place for individual freedom rather than a totalitarian nightmare of centralized control on computing.

First, and by all means, take reasonable steps to protect yourself. You don’t need to run faster than the lion — only faster than the slower gazelles that get caught. Whichever game you decide to play, learn the game before you play it, and avoid all the usual stupid mistakes by which the losers get caught. But understand that none of that will change the game.

However, mind that in a technical arms race, incrementally improving the average capabilities of one side does nothing to change the toxic dynamics. Incremental changes are part of the game and will just call for corresponding steps from the other party. Only decisive steps that can’t be balanced by the other side can and will ultimately decide the case. And once a decisive advantage is gained, the complete destruction of the Enemy is required, for He will not hesitate if and when we ever falter and let Him gain an advantage over us.

Let me conclude. You and I want to build a safer world, where individuals control their computer software and hardware, their brains and bodies, their persons and their properties, rather than being cattle under the control of a gang of farmers. But there is no shortcut. It will take a lot of effort by a lot of people, who will each have to change their ways, and learn habits of good computing hygiene and thinking hygiene. None of us can change the whole world at once; we can each only make a small change. Yet those small changes are the only thing that ever changes the world, and my, how the world changes as a result! You and I can and must do our part in changing the world: by changing ourselves. In the words of Gandhi (disputed), “Whatever you do will be insignificant, but it is very important that you do it.”