08 October 2020
As artificial intelligence advances, how do we regulate the development of AI systems to ensure that these systems behave in an ethical manner towards all humans? In this article, I explore the intersection of ethics, artificial intelligence, and legal regulation to highlight challenges and consider possible regulatory approaches as we race towards an uncertain future in which AI plays an increasingly important role.
Specifically, I make two observations and then discuss possible regulatory approaches to managing bad outcomes. I made these points in brief at TechLaw.Fest 2020’s panel on “Applying Ethical Principles for Artificial Intelligence and Autonomous Systems in Regulatory Reform” and have elaborated on them here.
First, let me start by giving some context.
Artificial Intelligence (“AI”), with its great potential, has also been a cause for concern. From iconic sci-fi movies like The Terminator, to billionaire technologists like Elon Musk, to world-renowned physicists like Stephen Hawking, warnings of autonomous hyperintelligent AIs that evolve beyond our control abound.
For a long time, no action was needed. AI simply wasn’t “smart enough”, and all the markers we used to evaluate machine intelligence indicated that humans were still the kings of the hill.
Then in 1997, IBM’s Deep Blue AI defeated Russian chess grandmaster and reigning world champion, Garry Kasparov, under tournament conditions – a hugely symbolic moment representing the first defeat of one of humanity’s great intellectual champions by a machine.
In 2011, IBM’s Watson AI defeated two former champions at the game show Jeopardy, demonstrating that it could analyze subtle meaning, irony, riddles, and other complex concepts in which machines traditionally had not excelled.
In 2016, the Google DeepMind AlphaGo AI won a 5-game match against 18-time world champion Go master, Lee Sedol – one of the strongest Go players in history. Go is a strategy game orders of magnitude more complex than chess, and an AI victory was thought to be 5-10 years away.
In 2017, the Libratus AI defeated four of the world’s top poker players in a 20-day tournament – notable because poker requires players to make decisions based on incomplete information.
In 2018, Google’s Duplex AI arguably passed a limited form of the Turing Test. Duplex held telephone conversations with humans to help them make appointments, and the humans could not tell that Duplex was a machine.
The breakthroughs continue in natural language processing, visual recognition, understanding of long-term consequences, the three-body problem in classical mechanics, and more. So far, the advances have revealed that AI is extremely good (or undefeatable) at specific things, but has not yet become a more generalized form of intelligence. Undoubtedly, that is the direction in which technological advancements are headed.
In response, governments, international organizations, and private industry have begun publishing guidelines and ethical principles on how AI systems should be designed, in the hopes that advancements in AI help, rather than hurt, humanity.
A few examples of this include:
(collectively, the “Guidelines”)
Observation #1: Machines are inherently worse ethical agents than humans
Having set the scene, let me now make an observation about AI and ethics – machines are probably inherently worse ethical agents than humans.
We may think of machines as impartial and therefore potentially “fairer” ethical agents, but there are 2 concrete reasons why I believe machines are worse ethical agents than humans.
First, inherent bias.
Bias is a major concern because one of the advantages of machines is that they generate outcomes – whether good or bad – efficiently and at scale. Conway’s Law states that organizations design systems reflecting their communication structures. Similarly, developers design software that reflects their own values, which are in turn influenced by their society and culture. So while developers may aim to exclude bias from the datasets used to train AI, key protocols inherent in the design of an AI system may still reflect the inherent biases of its developers, with the result that AI systems may never be truly impartial.
Second, machines lack the ability to make virtue-based decisions.
Humans use a variety of ethical approaches to decision-making. Four major schools of thought dominate the contemporary ethics landscape, and we (individuals, communities, and governments) make judgment calls about which approach ought to take precedence in a particular instance, as influenced by our own inherent biases and socio-cultural contexts.
Utilitarianism, the most well-known form of consequentialist thought, takes the view that we ought to provide the maximum utility (good) for the most people. An example of this in law is the enactment of laws that curtail personal freedoms to prevent the spread of COVID-19. In other words, certain losses are morally acceptable if they result in the “greater good” – the maximum utility for all. You can also imagine how this might translate to a program – you assign positive values to good outcomes and negative values to bad outcomes, and optimize for the maximum value across the maximum number of people.
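To make the translation concrete, here is a minimal sketch of how a utilitarian decision procedure might be programmed. The actions, the people affected, and the utility scores below are all hypothetical values invented for illustration, not drawn from any real system:

```python
# Hypothetical sketch: a utilitarian decision procedure assigns positive
# values to good outcomes and negative values to bad outcomes, then picks
# the action with the greatest total utility across all affected people.

def choose_action(actions):
    """Pick the action maximizing summed utility over everyone affected."""
    return max(actions, key=lambda a: sum(a["utilities"].values()))

# Invented example: a lockdown costs two people some freedom (-2 each)
# but protects a vulnerable person (+5); no lockdown does the reverse.
actions = [
    {"name": "lockdown",    "utilities": {"alice": -2, "bob": -2, "carol": 5}},
    {"name": "no_lockdown", "utilities": {"alice": 1,  "bob": 1,  "carol": -4}},
]

print(choose_action(actions)["name"])  # "lockdown" (total utility 1 vs -2)
```

The moral weight in such a program lives entirely in the utility numbers – who assigns them, and how, is exactly where the biases discussed above re-enter.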
Deontological ethics takes the view that the ethical course of action is to follow ethical rules and duties. An example is Immanuel Kant’s Categorical Imperative, “act by that maxim which you can at the same time will as a universal law” – in other words, something is moral only if you would also permit everyone else to do it. This approach often manifests itself in laws that protect individual rights (e.g. it is immoral to curtail someone else’s right to free speech because I would object if someone tried to curtail my own right to free speech). Such rule-based decision making could also lend itself well to programming if you could translate it to a programming protocol that takes precedence over others.
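A deontological protocol might be sketched as a set of hard constraints that filter candidate actions before any optimization happens, so that the rules take precedence regardless of outcome. The duty list and action names below are hypothetical:

```python
# Hypothetical sketch: deontological duties as hard constraints that are
# applied before, and take precedence over, any outcome-based optimization.

FORBIDDEN = {"lie", "break_promise"}  # invented duty list for illustration

def permissible(actions):
    """Keep only actions that violate no duty, regardless of consequences."""
    return [a for a in actions if a not in FORBIDDEN]

print(permissible(["lie", "tell_truth", "stay_silent"]))
# ["tell_truth", "stay_silent"]
```

Note the structural contrast with the utilitarian approach: here no action is scored at all – forbidden actions are simply removed from consideration.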
Natural Law ethics takes the view that people have an inherent (natural) and universal sense of morality. Sections 377 and 377A of Singapore’s Penal Code 1985 Rev Ed made reference to “Unnatural Offences” of carnal intercourse or gross indecency. Singapore’s current Penal Code no longer uses the term “Unnatural Offences” but criminalizes the specific acts. Where specific ethical rules and duties can be identified, these rules may be programmed similarly to how deontological rules are treated.
Finally, Virtue ethics takes the view that the ethical course of action is to act virtuously, such as with empathy, or duty, or gratitude. In other words, the morality of the action is not dependent on the outcome (in contrast to utilitarianism and other forms of consequentialism), but rather the mental state of the actor. We see this manifest in the law through the concept of mens rea, where criminal intent is required for culpability, as well as in Good Samaritan laws that protect persons from civil or criminal liability where the person acted virtuously to provide reasonable assistance in an emergency, even if it results in the injury or death of another. Unlike the other ethical approaches, it is not so clear how to approach this from a programming perspective. How do you code for empathy or duty?
So machines may be inherently hamstrung in their ability to make “ethical” decisions to the degree that humans do.
Observation #2: AI decisions may be inherently un-explainable
The concept of “explainability” of AI decisions is described in a variety of ways:
UNI Global Union 10 Principles for Ethical AI: Equip AI Systems With an “Ethical Black Box” … the ethical black box would record all decisions, its bases for decision-making, movements, and sensory data for its robot host. The data provided by the black box could also assist robots in explaining their actions in language human users can understand, fostering better relationships and improving the user experience.
IBM Everyday Ethics for Artificial Intelligence: Explainability - AI should be designed for humans to easily perceive, detect, and understand its decision process.
Singapore Model Artificial Intelligence Governance Framework: Explainability - Explainability is achieved by explaining how deployed AI models’ algorithms function and/or how the decision-making process incorporates model predictions. The purpose of being able to explain predictions made by AI is to build understanding and trust. An algorithm deployed in an AI solution is said to be explainable if how it functions and how it arrives at a particular prediction can be explained. When an algorithm cannot be explained, understanding and trust can still be built by explaining how predictions play a role in the decision-making process.
The common thread through these various descriptions is that they revolve around transparency in the decision-making process – understanding the “why” behind a decision. However, actual explainability ultimately may not be possible.
Normal programs use algorithms, which are sequences of functions that are applied to solve problems. A simple algorithm may comprise only 1 function. The function is applied to an input to create an output. So in each function, you are solving for only 1 problem, the output.
In a machine learning program, you need to solve for 2 problems, the algorithm (i.e. the function or sequence of functions), and the output.
To develop the algorithm, you start with known inputs and outputs, and the machine proposes an approximate algorithm, measuring how closely its actual output matches the intended (known) output. The machine then iterates countless times, refining the algorithm until the actual output is as close as possible to the intended output. Having found an algorithm that seems to work consistently, you can then apply it to unknown inputs, and most of the time it should produce correct outputs.
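The iterative refinement described above can be sketched in a few lines. This is a deliberately toy example – fitting a single parameter by simple gradient descent – and the data and learning rate are invented for illustration:

```python
# Hypothetical sketch: "solving for the algorithm" by iterative refinement.
# We search for a slope w so that f(x) = w * x reproduces known outputs,
# nudging w each round to shrink the error (simple gradient descent).

known = [(1, 2.0), (2, 4.0), (3, 6.0)]  # known inputs and outputs (y = 2x)

w = 0.0                        # initial guess at the "algorithm"
for _ in range(1000):          # the machine iterates many times
    for x, y in known:
        error = w * x - y      # gap between actual and intended output
        w -= 0.01 * error * x  # adjust w in the direction that reduces it

print(round(w, 2))  # converges to 2.0
```

The point of the sketch is that the final value of `w` is reached by repeated numerical adjustment, not by anyone reasoning that the answer ought to be 2 – which is precisely why asking the system “why” afterwards yields so little.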
The first observation here is to note that it is the machine that creates the various iterations of the algorithm, not a developer who uses his/her own reasoning to modify the algorithm in the hopes of achieving a particular output.
The second observation here is that the iterative process relies on inductive rather than deductive logic. “Explainability” presumes there is a purpose and reason to why an algorithm is modified a certain way. But there may be no reason beyond “it just works better” – it produces the intended output more often, and we should be careful not to impute reasoning upon the machine where there is none.
So true “explainability”, in terms of being able to understand the “why” behind a decision, may simply not be feasible. We may be able to see the algorithm that was applied to reach a decision - the "how", but the "why" is likely to be both computationally complex and practically unhelpful in illuminating our understanding of the AI's reasoning (if any).
Alternatives to explainability
If true “explainability” is not feasible, what is the next best alternative? This remains to be seen, but some interesting alternative approaches have emerged.
First, instead of relying on “explainability” as an ethical principle, some of the Guidelines above side-step this issue by emphasizing concepts like “transparency”, “responsible disclosure”, and “accountability”. It remains unclear how these requirements differ from “explainability” in practice, but one possibility is to define a concept like “accountability” in a way that removes the need for explainability.
Google describes its principle of AI accountability as follows:
4. Be accountable to people.
We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.
During a panel at the TechLaw.Fest 2020 conference, Mr Manish Gupta, the Director of Google Research India, provided the following example: Where a decision has a binary outcome, either positive or negative, that affects a person, no explanation is required if the outcome is positive. If the outcome is negative, such as rejecting a loan application, the AI will tell the user what is required to get a positive outcome.
This approach does not provide an explanation of why the AI produced a negative outcome, instead focusing on providing an explanation of how to obtain a positive outcome. It also puts an anthropomorphic spin on the AI and makes its decision appear more human-like, while expressing the outcome in an easily understandable way.
Another possibility is to have specific ethical protocols which are programmed deterministically by humans rather than through machine learning, can be separately examined, and which override the ordinary application of the AI’s algorithms.
The challenge with this is that, as mentioned previously, there is no one dominant school of thought, and very often a “right” or "morally defensible" decision is just a matter of a small degree of prioritization of one ethical approach over another, and alternative prioritizations may be equally valid.
The Trolley Dilemma is a popular starting point for illustrating this. Let us imagine a version of the Trolley Dilemma in which an AI-controlled trolley’s brakes have failed. Ahead of the trolley lies a fork in the rails. On one fork, a madman has left 5 people tied up across the rails; on the other fork he has left just 1 person tied across the rails. If the AI does nothing, it will run over the 5 persons, killing them. If the AI takes positive action to avoid this, it will switch rail paths and run over just 1 person, killing him.
Let us then imagine that the AI trolley driver has been programmed with a simple rule: robots must not take any action that will harm a human. In such a scenario, the AI may not take action because if it takes action, it will kill the 1 person tied across the rails. It therefore chooses inaction and continues ahead, killing 5 persons. “Explainability” in this case would involve the developers being able to demonstrate that notwithstanding the AI’s ordinary algorithms, an overriding ethical protocol to avoid taking actions that harm humans resulted in the AI’s inaction.
Let us imagine an alternative scenario where there are 2 rules - the deontological rule above, and a utilitarian rule that takes precedence: (1) robots must not take any action that will harm a human; and (2) notwithstanding (1), a robot may harm or allow harm to come to a human if such action prevents more harm to more humans. In such a scenario, the AI may take positive action to sacrifice the 1 person tied across the rails to preserve the safety of the 5 other persons. “Explainability” again would involve the developers being able to demonstrate the existence and operation of this ethical protocol.
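The two-rule protocol in this second scenario could be sketched as an overriding ethical check that runs before the AI’s ordinary algorithms. The function name and interface are hypothetical; the casualty counts come from the trolley scenario above:

```python
# Hypothetical sketch of the two-rule protocol:
#   Rule 1: a robot must not take any action that will harm a human.
#   Rule 2 (takes precedence): harm is permitted if the action prevents
#   more harm to more humans.

def choose(act_kills, inaction_kills):
    """Decide between acting and not acting given expected casualties."""
    if act_kills < inaction_kills:  # rule 2 overrides rule 1
        return "switch"
    return "do_nothing"             # rule 1: never act to harm

print(choose(act_kills=1, inaction_kills=5))  # "switch"
```

Because this protocol is programmed deterministically by humans rather than learned, it can be separately examined – which is exactly the kind of demonstration “explainability” would require of the developers.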
In reality, there are other solutions that may conveniently skirt the need for ethical decision-making by the AI trolley driver – for example, a human could be designated to make this decision. But there remains the possibility that there may be scenarios in which humans are not around or available to make those decisions (e.g. if a human is incapacitated), or where a machine might even be expected to intervene to prevent human error (e.g. a vehicle having an automatic braking system that intervenes to prevent crashes caused by careless human drivers).
Governance and managing bad outcomes
Ethical guidelines and principles are helpful in the sense that they articulate and crystallize what we hope to achieve in governing and regulating AI systems, but they will need to be translated into laws and regulations to have legal force.
Even where laws and regulations are enacted, we can reasonably conclude, having observed that machines are imperfect ethical agents and that explaining their decisions may be inherently difficult and/or unhelpful, that inexplicable unforeseen bad outcomes will almost certainly happen from time to time.
Therefore, the ability to manage bad outcomes, particularly at-scale, takes on much greater importance now than it has in the past. In this regard, I share two examples of regulatory approaches that could be employed to try and predict and/or manage bad outcomes.
First, mandatory sandbox testing using extremely large data sets of real anonymized data to test for unanticipated outcomes, and particularly behaviors and biases that emerge after extended use as the machine’s algorithms evolve.
We can also look to human clinical trials for a structure for how such testing could be conducted – 3 phases of mandatory sandbox testing, each phase using an increasingly large data-set, and the third phase using a global data-set, with an optional fourth phase of ongoing testing after the product goes to market.
Having said this, I recognize the practical challenge of obtaining these data-sets, whether from private industry or other governments, as well as the dangers of entrusting large data-sets (even if anonymized) to private industry and governments. In this regard, privacy-oriented blockchains could be a potential partial solution in that they may give individuals the ability to revoke access to their data at any time, but this still would not prevent private industry and governments from potentially abusing the aggregated data.
Second, we could employ a mandatory licensing regime that facilitates compensating for bad outcomes. In this regime, license fees could be pooled, and if a machine causes a bad outcome, and fault or liability simply cannot be attributed, the pool of funds may be used in appropriate situations to compensate the injured party.
* Note: I have used "machine learning" and "AI" interchangeably in this article for ease of reading, but to be accurate, the two are discrete. "Machine learning" is the study of computer algorithms that allow programs to automatically improve through experience. "Artificial Intelligence" is a much broader term involving the science, engineering, and anthropocentric design of a program to mimic what humans consider “intelligent” behavior. Machine learning is not a strict prerequisite, although it is commonly found in AI programs.
Associate Director, BR Law Corporation
The materials in these articles have been prepared for general informational purposes only and are not legal advice or a substitute for legal counsel. If you require legal advice for your particular circumstances, please consult a suitably qualified legal counsel. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. You should not rely or act upon this information without seeking professional counsel. Whilst we endeavour to ensure that the information in these articles is correct, no warranty, express or implied, is given as to its accuracy and we do not accept any liability for error or omission. The authors of the articles are or were employees of BR Law Corporation at the time of publication, but may no longer be, now or in the future, in the employ of the firm.