I read a recent Atlantic article about the “coming” software pandemic with some surprise — because the pandemic isn’t coming, it’s already here. Bad code has existed as long as there has been code, and we have been suffering the pandemic of bad code at scale for decades.
A question we should all be asking: How many people have died from bad code? The answer is certainly not zero, but what is it? It’s a number we should know. If you want trouble sleeping at night, look up the Therac-25 mess of the mid-1980s, in which a race condition and an overflow bug in a radiation therapy machine’s software delivered massive overdoses; at least three patients died.
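One of the documented Therac-25 flaws is small enough to sketch. A shared one-byte counter (“Class3”) was incremented on every pass through the machine’s setup routine, and a value of zero was taken to mean the upper-collimator safety check could be skipped, so every 256th pass the counter wrapped to zero and the check silently vanished. The sketch below is a hypothetical C reconstruction of that failure mode, not the original code, which was PDP-11 assembly:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative reconstruction of the documented Therac-25 "Class3"
 * flaw: a one-byte shared variable was incremented on each pass
 * through the setup routine, and a value of zero was interpreted
 * as "no position check needed." Every 256th increment the counter
 * wrapped to zero and the safety check was silently skipped.
 * (Hypothetical sketch; the real system was PDP-11 assembly.) */
static uint8_t class3 = 0;

/* Returns nonzero when the upper-collimator check must run. */
static int position_check_required(void) {
    class3++;               /* wraps to 0 on every 256th call */
    return class3 != 0;     /* 0 is misread as "check not needed" */
}

int main(void) {
    for (int pass = 1; pass <= 512; pass++) {
        if (!position_check_required()) {
            printf("pass %d: safety check silently skipped\n", pass);
        }
    }
    return 0;
}
```

Nothing in that code announces that lives depend on the flag; that ordinariness is exactly the problem.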
From stock-market flash crashes, to recent 911 outages in New York, to Equifax failing to patch a known vulnerability, to autonomous-driving accidents, to military systems missing their targets and killing civilians, to Russian hacking of the 2016 presidential election, we are living in a world of bad code.
The status quo for dealing with bad code is … well, there isn’t one. Cleanup is left to individual organizations and individuals, and the response is so ad hoc that more than a few companies offer “bounties” for finding bad code; Google recently expanded its own bounty program to include third-party apps in the Play Store. A single individual can harm thousands of people, and state-sponsored cyber operations can harm billions. So there is no status quo, just a massive technology industry that fights regulation no matter how many times things go pear-shaped. Even limited regulations meet significant pushback, to the point that recent “guidance” around medical-device software is nonbinding, and the scope of tech-company lobbying grows daily.
Perhaps we can find the start of a solution by looking at coordinated global responses to other crises. HIV crossed from chimpanzees to humans around 1920 but drew little attention until the early 1980s; the World Health Organization’s (WHO) Global Programme on AIDS launched in 1987. I draw the comparison neither to inflate the software problem nor to minimize the impact of HIV/AIDS, but to frame this conversation at an appropriate scale.
The WHO’s global program worked because it pulled political, public, educational, and economic activities into a focus on three main areas: preventing infection, alleviating the personal and societal impact of HIV, and mobilizing national and international response efforts.
Just as I don’t trust drug companies or banks to regulate themselves, I do not trust technologists to regulate themselves, and I say that as one. The rise of AI makes the problem even more serious: instead of people writing all the code, organizations like Google are building AIs that design and train other AIs.
I believe we need a regulatory framework here, one that draws on the successes and lessons of our responses to other global crises like HIV/AIDS. Here’s what it might entail:
1. An innovation program that invests specifically in scalable preventative technologies, from consumer antivirus and anti-malware protection to broader enterprise security tools. This program should also include educational activities and specifically leverage low- or no-cost, open-source solutions.
2. A program to alleviate the impact of bad code, with common standards for consumer communication, reporting, transparency, and the distribution of information when bad-code events occur.
3. A multinational organization that can unify all efforts, ensuring national programs are up to date, working with private corporations and the public sector, and specifically coordinating the global response.
Clearly, this would be a modest set of first steps, but it could lay the groundwork for larger, more impactful results over time. It’s a conversation we must start having now. I surely don’t want to wait until the fifth decade of this pandemic, by which time we may finally have figured out how to count the people bad code has killed but will have no way of controlling the AIs that are writing most of it.