Business has pricked up its ears lately regarding ethics and artificial intelligence: news platforms are saturated with reports on automated weapons systems, bias in algorithmic decision making, and the role humans play in training the machines that make those determinations.
Corporate leaders, in turn, have responded with statements of good intention. Seng Yee Lau, a senior VP of Chinese technology group Tencent, has declared the importance of user privacy on social networks and the need for “tech for good.” Microsoft CEO Satya Nadella urges developers to consider the unintended consequences of their work and to learn to recognize bias in the systems they build.
These responses suggest that the ethics of artificial intelligence can be managed in the traditional manner: with good intentions and a purposeful effort to ‘do better.’
Yet data ethicists working at the intersection of technology, philosophy and the social sciences suggest otherwise. While news coverage makes it seem as if the interconnected questions of ethics and artificial intelligence are a recent issue, the combination is nothing new: a substantial body of academic work dates back to at least the 1900s.
In a recent effort to map the field, researchers at the Oxford Internet Institute and the Turing Institute identified six distinct domains of inquiry. These include epistemic concerns, or difficulties in clarifying how data is used to produce algorithmic conclusions, and normative concerns, or the various ways in which applied algorithms can produce discriminatory or unethical decisions.
One theme that runs through the research is the idea that artificial intelligence produces specific and novel issues that not only introduce new ethical questions, but may require new ways of thinking about ethics as a whole. While these may not be among the day-to-day issues that confront a business leader today, the impact and scope of ethics-related issues are growing, and will likely play an expanded role in future business environments.
As artificial intelligence becomes more commonplace, more specific ethical issues are likely to surface in the mainstream media. Early analysts have warned that we lack the technical and conceptual tools needed to understand and tame emerging issues such as the following:
The difficulty of oversight
Researchers repeatedly warn that algorithms and the data they act on can be inscrutable and opaque, even when there are efforts at transparency. This is partly because they are shielded as proprietary, but also because learning algorithms alter their own mechanics as they learn.
Algorithms also process at a speed and scale beyond the realistic abilities of even the most dedicated forensic teams. As researchers Mike Ananny and Kate Crawford point out,
System builders themselves are often unable to explain how a complex system works, which parts are essential for its operation, or how the ephemeral nature of computational representations are compatible with transparency laws.
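To make the oversight problem concrete, here is a minimal sketch using entirely synthetic data and a hypothetical online learner (not any system the researchers describe). Its internal weights shift with every new batch of data, so the exact model that produced a past decision may no longer exist by the time anyone tries to audit it.

```python
# Sketch: an online learner whose internal state drifts as new data arrives.
# All data and parameters here are synthetic/hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

weights = np.zeros(3)   # model state: changes continuously as data arrives
snapshots = []          # what an auditor would need, but rarely has

for day in range(5):
    # Each "day" brings a new batch of examples (3 features, binary label).
    X = rng.normal(size=(100, 3))
    y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=100) > 0).astype(float)

    # One pass of stochastic gradient descent on this batch.
    for xi, yi in zip(X, y):
        pred = sigmoid(xi @ weights)
        weights -= 0.1 * (pred - yi) * xi

    snapshots.append(weights.copy())
    print(f"day {day}: weights = {np.round(weights, 2)}")

# The decision boundary drifts from day to day; explaining exactly why a
# particular case was decided a certain way on day 2 requires the day-2
# snapshot, which production systems often do not retain.
```

The point of the sketch is not the learning algorithm itself but the bookkeeping: unless every intermediate state is logged, even the system’s builders cannot reconstruct how a past decision was reached.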
The problem of agency
Researchers at the Oxford Internet Institute and the Turing Institute observe that many efforts to assign responsibility or agency land on either humans or machines, but never on a combination of the two.
They are dissatisfied with the two extremes that currently exist: either humans blame the algorithm when things go wrong, or artificial agents are denied moral agency altogether and a person (such as the algorithm’s designer) is held to blame. Instead, they argue that both humans and machines may need to be held responsible for an unfortunate event, especially since humans cannot yet effectively intervene in an automated system while it runs.
The uniqueness of algorithmic decision making
Applying the logic of existing ethical rules and norms to artificial intelligence is unlikely to work, because these systems operate on correlation rather than an understanding of causality.
As Mittelstadt and Allo explain, algorithms defy the ability to understand causality because it is “not established prior to acting upon the evidence produced by the algorithm. In addition, the search for causal links is difficult, as correlations established in large, proprietary datasets are frequently not reproducible or falsifiable.”
In application, such correlations can serve to discriminate. For example, a correlation between income level and debt repayment that is apparent in a large dataset will function as discrimination when applied proactively to an individual low-income loan applicant, as the sketch below illustrates.
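The following sketch is a minimal, entirely synthetic illustration of that dynamic. The income figures, the one-feature model and the approve() helper are hypothetical, not drawn from any real lender: a model fitted to historical data in which income and repayment are correlated ends up denying a low-income applicant on income alone.

```python
# Sketch: a learned correlation becomes a blanket rule.
# All data here is synthetic; the model and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history: higher income is correlated with repayment, with noise.
income = rng.uniform(20_000, 120_000, size=1_000)
repaid = (income / 120_000 + rng.normal(0, 0.25, size=1_000)) > 0.5

# Fit a one-feature logistic model by simple gradient descent.
x = (income - income.mean()) / income.std()
w, b = 0.0, 0.0
for _ in range(2_000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - repaid) * x)
    b -= 0.1 * np.mean(p - repaid)

def approve(applicant_income):
    """Apply the learned correlation to a single new applicant."""
    z = w * (applicant_income - income.mean()) / income.std() + b
    return 1.0 / (1.0 + np.exp(-z)) > 0.5

print(approve(30_000))   # likely False: denied on income alone
print(approve(100_000))  # likely True
```

Nothing in the fitted model considers the individual applicant’s circumstances; the historical correlation, applied proactively, does all the work.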
In a slightly different vein, it can be difficult to interrogate the causal reasoning of an algorithm the way we can a person’s, since a person can explain how they arrived at a particular decision.
Algorithms, by contrast, are created by teams and can contain the compounded sum of those teams’ biases, which may be extremely complex to unravel. As a result, “the rationale of an algorithm can… be incomprehensible to humans, rendering the legitimacy of decisions difficult to challenge” (Mittelstadt and Allo).
Ethics of artificial intelligence is not ethics as usual
These issues, long peripheral to mainstream discourse, are beginning to surface as machine learning and other algorithmic systems become more prevalent in our institutions. They are not unaddressable, but as data ethicists have pointed out for several decades, they will require new ways of thinking and new guidelines for what is fair, democratic and protective of human autonomy above all else.