When should we formally regulate a technology?


This post is inspired by the first chapter of Lucy Suchman's book Human-Machine Reconfigurations (2nd edition, 2007). Suchman is an anthropologist who has worked in, and critically examines, the technology industry (she was a researcher at the massively influential Xerox Palo Alto Research Center). I won't examine that work in detail here; these are some notes on ideas I've had that are relevant to the ethical and legal issues emerging from big tech's current obsession with AI. The main focus of the chapter is how technologies present their purpose and operations to people, and how that shapes human-machine relations (with humans sometimes assuming that machines have human-like intentions, thoughts, and desires). The habitability effect, for example, occurs when a machine does something surprisingly clever and humans then infer that it is more generally intelligent and capable than it actually is. In safety-critical situations (driving a car, flying an aeroplane, making policy decisions) that inference may lead to disaster.

My thoughts:

At some point in the late 20th century, machines reached a level of sophistication at which very few of their "users" could understand the detail of their operations. However, we benefit enough from these machines that we are prepared to trust them. They present us with what we can call socially acceptable and trusted opaque processes.

Some of these processes are trusted because they are formally regulated. Air travel has many of them and is highly regulated, but there is always a tension between industry innovators (and the need to protect intellectual capital in those innovations) and regulators.

Other processes are governed, to some extent, by informal convention embedded in habit. There are a lot of these in software design and development. They are built into expected practice (with some variation between fields) and into the technical frameworks that programmers depend on (languages, frameworks, libraries, integrated development environments, code management platforms). Again, some innovators try to break out of these conventions (rightly or wrongly), seeking an innovative edge that disrupts convention for profit. But there are many associated risks, not least the difficulty of recruiting other developers to work in heretical ways – software development is rather like the world of the pirate (as described in the book Be More Pirate).

Finally, there are processes that we trust on the assumption that some kind of order and regulation exists, even though there is no reasonable basis for that assumption. Socially acceptable and trusted opaque processes are therefore either:

  • Regulated
  • Convention governed
  • Trusted but not assured

Let’s consider that last category: trusted but not assured. Why do we do this? Laziness, perhaps. And perhaps because regulated and convention-governed tech is so ubiquitous and reliable that when we encounter unregulated tech we unconsciously assume it is OK to use. There’s also an extension of the habitability effect: if a machine does something clever and seems to be on our side, we unconsciously extrapolate greater intelligence and trustworthiness.

So what might be wrong with this? Some dimensions to consider:

  1. The technology might be unreliable.
  2. There might be serious immediate consequences when it goes wrong.
  3. It might have a disruptive impact on other parts of the system, resulting in harm (often longer term).
  4. The technology might enable otherwise difficult or impossible immoral or unlawful acts.
  5. We might assume that we know the purpose of the machine (to serve us in some way), but in fact there are hidden purposes (to exploit us for gain, which may be harmful).

We tend to focus on the second of these points when considering regulation. But increasingly we are also thinking about the third point in relation to system-level issues (especially climate change) – for example, there is regulation aimed at coping with the way prolonged reliance on autopilot degrades pilot skills over time. The fourth point is regulated as part of crime prevention. But regulation seems less common for the fifth case, where bad intent is built into the system but hidden from view. We need to take all of these issues (and probably more) seriously.

Hang on, we’ve been here before. GDPR is designed to regulate digital information systems so as to prevent exactly these issues. It was introduced, perhaps just in time, after many years of unregulated damage. We might argue that the rise of AI as the tech industry’s imagined saviour is a response to that regulation – a way of taking unexamined risks and exploitations beyond state control. There are other motivations too (massive investment in cloud computing is failing to pay off as that tech becomes commoditised and as power costs rise, so providers need to get us to use it in new ways). But really we need to regulate now, and perhaps deregulate later as we come to understand the implications.

And that leads to another thought: if new tech has an impact (sometimes bad, sometimes good) on mental, social, and economic health, why is it not governed by an ethical and legal framework similar to the one used for medicine? That would be hugely challenging, and perhaps too imposing given where we are today. But could we identify when technologies genuinely have a pharmaceutical-like impact on people, and then regulate?

It’s complicated, but we need to think about it.
