Tech Companies Will Be Evil When They Can, We Need a New Framework
Hardly a day goes by without an online user data debacle. Some hacks and breaches make the front pages of the few remaining newspapers: Think of the 2013 Target breach, the 2017 Equifax breach, the 2014 celebrity phone hacks, and the Cambridge Analytica scandal. Other failures and abuses appear in the trade press without attracting mainstream attention; for example, Twitter CEO Jack Dorsey's own account was hacked in late August 2019. Then, of course, there are the hundreds of thousands of compromised accounts, phished emails, and stolen social media profiles that afflict everyday users without ever drawing press coverage. The world's largest tech companies have promised improvement, but most computer users have only noticed changes for the worse. What is to be done?
In another era, tech companies promised they would be virtuous corporate citizens. Google didn't promise to be actively good, but at least the passive "Don't Be Evil" was part of its code of conduct for many years. How things have changed! Today, individual coders and developers at Silicon Valley titans may have the best intentions, but again and again their work is used for questionable ends. Perhaps "Don't Be Evil" was never enough. Perhaps we need a digital architecture that fundamentally prohibits bad behavior: A "Can't Be Evil" system.
Succinctly, a can't-be-evil architecture enforces accountability — something that has been lost in the transition to Web 2.0. The architecture is formed around three system design principles that, when faithfully applied, ensure that applications and their developers are accountable to users, and that users are accountable to each other. Because can't-be-evil design demands strong security and data immutability, blockchains, sometimes called distributed ledgers, are the ideal tool for constructing this new internet.
The first principle of can’t-be-evil is that each user is the authoritative source of all data they produce and all identities they claim. In particular, the user must have unilateral control over where all the data they produce is stored and who can access it. Applications should not store any data of their own (they don’t “own” anything), but should instead rely on a common protocol to direct their reads and writes to the data source(s) that the logged-in user has designated.
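The routing described above can be sketched in a few lines. This is a hypothetical illustration, not a real protocol: the names (`DataRouter`, `InMemoryStore`, `designate`) are invented here, and a real deployment would route to remote, user-chosen hosts rather than an in-process dictionary.

```python
class InMemoryStore:
    """Stand-in for any user-designated backend (cloud host, home server, local disk)."""
    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data[key]


class DataRouter:
    """The common protocol layer: applications own no storage of their own,
    and every read or write is directed to the store the logged-in user chose."""
    def __init__(self):
        self._stores = {}  # user id -> that user's designated store

    def designate(self, user, store):
        # Only the user decides where their data lives.
        self._stores[user] = store

    def write(self, user, key, value):
        self._stores[user].write(key, value)

    def read(self, user, key):
        return self._stores[user].read(key)


# An application never touches storage directly; it goes through the router.
router = DataRouter()
alice_store = InMemoryStore()          # Alice picks her own backend
router.designate("alice", alice_store)
router.write("alice", "profile", {"name": "Alice"})
```

Because the application sees only the router, Alice can later swap `alice_store` for a different backend without the application's cooperation — the unilateral control the principle calls for.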
The second principle of can’t-be-evil is that all data ownership is expressed through non-repudiable links to user identities. A user is only the owner of a piece of data if they sign it with their identifying keys. This strikes a balance between the competing needs of anonymity and accountability. When the second principle is followed, a given user may own many unrelated identities, but everyone can learn what data originated from which identities.
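The sign-then-verify mechanics behind this principle can be shown with textbook RSA. This is a toy sketch with tiny, deliberately insecure parameters, purely to make the mechanics concrete; a real system would use a vetted signature scheme such as Ed25519.

```python
import hashlib

# A user identity is a key pair: (n, e) is public, d is private.
p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent
d = 2753         # private exponent: e * d = 1 (mod (p-1)*(q-1))

def digest(data: bytes) -> int:
    """Hash the data and reduce it into the signing domain."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes, priv: int) -> int:
    """Only the holder of the private key can produce this signature."""
    return pow(digest(data), priv, n)

def verify(data: bytes, sig: int, pub: int) -> bool:
    """Anyone can check which identity a piece of data originated from."""
    return pow(sig, pub, n) == digest(data)

post = b"hello from this identity"
sig = sign(post, d)
assert verify(post, sig, e)              # the non-repudiable link holds
assert not verify(post, (sig + 1) % n, e)  # any forged signature fails
```

Note how the scheme delivers the balance the principle describes: nothing ties the key pair to a real-world person (anonymity), yet every signed item is provably linked to that key pair (accountability), and one person can hold many unrelated key pairs.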
The final principle of can’t-be-evil is that users have the tools to proactively defend themselves from bad actors, both in the data plane and control plane (pertaining to user data and user identity accounts, respectively). In the data plane, users can decide how much or how little freedom a given application has to interact with their data. They can restrict which files it can read, write, or enumerate, and they can prevent applications from erasing previously written data. In addition, users retain the full freedom to interact with their data outside of all applications, including making new replicas, migrating replicas to new hosts, altering replicas, and deleting replicas. In the control plane, users have the final say over which (if any) globally-visible changes to their accounts materialize, regardless of the code that executes the change. In the service of this principle, users have the protocol-level power to specify post-conditions on their account state that must hold true once a requested change is applied; if the post-conditions are not met, then the control plane code cannot modify the account. This principle in particular has an immediate use-case in blockchain designs: Users can deny transactions from changing their accounts’ balances if the transactions interact with rogue or buggy smart contracts.
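The control-plane post-condition mechanism can be sketched as follows. This is a hypothetical illustration, not any real blockchain's API: `Account`, `apply_change`, and the predicate list are invented names, and a real protocol would enforce the check at consensus level rather than in application code.

```python
import copy

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.post_conditions = []  # user-specified predicates on the final state

def apply_change(account, change):
    """Run `change` on a copy of the account; commit the result only if every
    user-specified post-condition holds on the resulting state."""
    candidate = copy.deepcopy(account)
    change(candidate)
    for cond in account.post_conditions:
        if not cond(candidate):
            return False  # rogue or buggy code cannot materialize the change
    account.balance = candidate.balance
    return True

acct = Account(balance=100)
# The user demands their balance never drop below 90, no matter what code runs:
acct.post_conditions.append(lambda a: a.balance >= 90)

applied = apply_change(acct, lambda a: setattr(a, "balance", a.balance - 5))
rejected = apply_change(acct, lambda a: setattr(a, "balance", a.balance - 50))
# The first change commits (balance 95); the second is rejected outright.
```

The key design point is that the predicate belongs to the user, not to the code requesting the change — which is exactly what lets a user deny a transaction from a rogue or buggy smart contract.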
By adhering to these three principles, a software ecosystem can restore the “natural order” that has been lost in the transition to Web 2.0. For too long, applications have used users more than users have used applications. With a can’t-be-evil framework, applications become a user’s tools again and users stop being a product for developers to sell.