Challenges Facing an Ethical AI

Eric S. Kvale
Bemidji State University

The concept of a general ethical Artificial Intelligence (AI) has begun to garner considerable attention in the public and academic spheres, but the challenges facing the creation of such an AI are formidable. These include the difficulty of defining computational terms, human definitions, and human values, as well as agreeing on which ethical system or set of values an AI should have. Further anticipated challenges include the enormity of scope and the general complexity of programming at that scale.

Problems with Terms
A major hurdle when confronting the prospect of creating an AI capable of handling a large variety of moral situations comes in defining terms we often take for granted. Humans understand these terms immediately, but expressing them in language a computer can process proves difficult. For example, how would one go about programming the concept of human life? If we use intuitive measures like heartbeat and respiration, and somehow define species, how do we tell a robot to apply CPR yet rule out trying to bring back to life all historically deceased people? How should “brain-dead” patients, or those in a persistent vegetative state, be defined with respect to machine ethics (Why Asimov's Laws of Robotics Don't Work, 2015)?
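A deliberately naive sketch makes the brittleness concrete. The checklist below (all names are hypothetical, invented only for illustration) encodes the intuitive criteria the paragraph mentions, and immediately misclassifies the edge cases the essay raises:

```python
# Hypothetical sketch: an intuitive "living human" checklist written as
# code, illustrating how quickly such definitions break down.
from dataclasses import dataclass

@dataclass
class Patient:
    species: str
    has_heartbeat: bool
    is_breathing: bool
    has_brain_activity: bool

def is_living_human(p: Patient) -> bool:
    # Intuitive checklist: species, heartbeat, respiration.
    return p.species == "human" and p.has_heartbeat and p.is_breathing

# Edge cases from the essay:
cardiac_arrest = Patient("human", False, False, True)  # CPR is warranted,
print(is_living_human(cardiac_arrest))                 # yet rule says False

brain_dead = Patient("human", True, True, False)       # status is contested,
print(is_living_human(brain_dead))                     # yet rule says True
```

The predicate calls the CPR candidate "not alive" and the brain-dead patient "alive" — exactly backwards from how we would want an ethical machine to act in each case.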
Additional considerations include giving weight to non-human animals. Do we give value to higher-functioning mammals? Should we place a higher value on species at high risk of extinction? Is it ethical to give value only to human life? If we do place a value on separate animal species, how do we define in a programming language what factors to consider?
Other problems that may arise when defining terms obvious to humans include a concept like murder. A definition would need to differentiate the various ways in which a human life is taken. What constitutes self-defense? In what ways is a wartime casualty different from the cold-blooded murder of a spouse? How can we program concepts like euthanasia and suicide? Moreover, humans cannot even agree on when life begins and ends. Is it before or after birth? Does the value in human life appear gradually as it develops, or instantly through some unknown process?
These definitional concerns are just the tip of the iceberg; similar issues will arise from defining the complexities that exist between marital partners, family ties, and friendships. All of these will need to be handled in a way where each factor is considered in relation to the others, for moral decisions cannot be made without contextualizing the entirety of the situation.

Problems with Ethical Systems
Humans continue to struggle to reach an acceptable ethical system (Brundage, 2013). Systems commonly used among modern philosophers include Utilitarianism and Kant’s Categorical Imperative. However, law enforcement employs a different system of its own, and large numbers of people follow, at least in part, ethical systems derived from religious principles. Additionally, a large variety of conflicting ideologies exists even among the systems mentioned.
Religious and cultural variety is easy to see, but it is not so easy to understand how, or whether, machine ethics should factor in these differences. Nor can one find complete agreement under a more detached philosophical or scientific banner, as disagreement continues amongst philosophers to this day. Further complications emerge when considering the real-world limitations the enacting of an ethical system might encounter, such as limited food and monetary resources.
One attempt to resolve these issues is to look for universal values that exist within all ethical systems. Most cultures agree, for instance, that the unwarranted taking of a life is immoral, as are incest, child abuse, and the unjust taking of another’s liberty. These universal values, however, are not so easy to find once you start defining each stance with specificity. Answers to questions like "what constitutes an ethical taking of life?" vary wildly. The age of consent varies, gender issues arise, and problems with race and religion emerge, complicating what on the surface appeared to be an agreed-upon value.

Problems with Utilitarianism
At first glance, Utilitarianism seems to lend itself nicely to a generalized ethical AI. The idea of “the greatest good for the greatest number” makes ascribing values to actions easier, and in theory one could program a computer based on the amount of good each action did. However, uncomfortable scenarios emerge rather quickly. Take, for example, a situation where the majority of the world feels it would be better off without a certain minority population. Could the AI reason that a correct course of action might be the eradication of that group (Why Asimov's Laws of Robotics Don't Work, 2015)?
It has been suggested that individuals who come from violent backgrounds, or are exposed to violence in any way, are more likely to commit violent acts themselves. Could a Utilitarian AI reason that the world as a whole would be better off if those exposed to violence (though innocent) were separated from the rest of humankind? These issues are not new problems for Utilitarianism, but they are given fresh life when considered in the context of a general AI.
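A toy act-utilitarian chooser shows how naturally this failure mode falls out of the arithmetic. The actions, populations, and payoffs below are invented purely for illustration:

```python
# Hypothetical sketch: a naive act-utilitarian chooser that picks the
# action with the greatest total welfare, summed over everyone affected.

def total_welfare(action):
    # Aggregate welfare: (per-person gain) x (number of people), summed.
    return sum(gain * count for gain, count in action["effects"])

actions = [
    {"name": "status quo",
     "effects": [(0, 1000)]},             # nothing changes for anyone
    {"name": "harm minority for majority gain",
     "effects": [(+1, 990), (-50, 10)]},  # 990 people gain a little,
]                                         # 10 people lose enormously

best = max(actions, key=total_welfare)
print(best["name"])  # the naive sum picks the action that harms the minority
```

Because 990 small gains (+990) outweigh 10 severe losses (-500), the aggregate comes out positive (+490) and the maximizer endorses the harmful action — the same structure as the scenarios above, in miniature.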
Additional considerations when implementing Utilitarian systems in machine ethics include factoring in time, where two major difficulties emerge. The first is that a computational system may not be able to compute all of the factors relevant to a utilitarian decision (or a decision under any other ethical system, for that matter) in a reasonable amount of time, as in this hyperbolic scenario from The Hitchhiker’s Guide to the Galaxy:
"There is an answer?" said Fook with breathless excitement.
"Yes," said Deep Thought. "Life, the Universe, and Everything. There is an answer. But, I'll have to think about it."
Fook glanced impatiently at his watch.
“How long?” he said.
“Seven and a half million years,” said Deep Thought.
Lunkwill and Fook blinked at each other.
“Seven and a half million years...!” they cried in chorus. (Adams, 1981)

While seven and a half million years might be a bit comical, creating an AI that makes complex ethical decisions on a human timescale is a realistic challenge.
The other potential danger is an AI looking so far into the future that all ethical decisions are equalized. For example, what value can a decision made now have if the entire universe is doomed to a heat death in the distant future?
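One standard remedy for this far-future worry is temporal discounting: weight utility at time t by a factor gamma^t with gamma < 1, so the total stays finite and near-term differences still matter. The numbers below are illustrative only, not drawn from any particular system:

```python
# Sketch: exponential discounting keeps an unbounded future from
# swamping (or nullifying) present-day choices. With gamma < 1, the
# discounted sum converges to u / (1 - gamma) rather than diverging.

def discounted_value(utility_per_step, gamma, horizon):
    # Sum of utility_per_step * gamma^t for t = 0 .. horizon - 1.
    return sum(utility_per_step * gamma**t for t in range(horizon))

print(discounted_value(1.0, 0.99, 10))        # near-term steps dominate
print(discounted_value(1.0, 0.99, 10_000))    # converges near 1/(1-0.99) = 100
```

Without the gamma factor, the ten-thousand-step sum would be 10,000 and any near-term difference between actions would vanish into the total; with it, the horizon can grow without erasing the value of deciding well today.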

While the challenges facing a general ethical AI are significant, and the potential for unintended ethical consequences is frightening, the advantages of a working general ethical AI would be tremendous. The possibility of removing cultural and individual bias would be of great benefit to our society and species: personal prejudice and racism could be eliminated, and resource allocation and wealth distribution could be managed in a fundamentally different way. Additionally, the value added by involving the memory and analytical power of advanced computers in complex global ethical scenarios holds great potential.

The challenges facing a generalized ethical AI at this juncture make the prospect unfeasible. The largest barrier is that humankind still struggles with making ethical decisions itself; unlike other applications of AI and of computers in general, there is no way to check whether you have arrived at the correct answer. Potential solutions to these challenges could include using a machine learning program to model the complexities of ethical systems, creating an AI that interprets and learns to understand the ethical world in a way more similar to that of the human mind. While these advancements are hard to imagine at the current time, the watershed possibility is not unfathomable. Even without a breakthrough, continued efforts toward the creation of a general AI could still give us a greater understanding of our own existing ethical structures, and of how we can translate those structures into a better and more just society, with or without the help of a generalized AI.

Adams, D. (1981). The universe of Douglas Adams: The hitchhiker's guide to the galaxy, the restaurant at the end of the universe, life, the universe and everything. New York: Pocket Books.
Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 15-26.
Brundage, M. (2013). Limitations and risks of machine ethics. Risks of Artificial Intelligence, 87-114. doi:10.1201/b19187-6
Computerphile. (2015, November 6). Why Asimov's Laws of Robotics Don't Work - Computerphile. Retrieved April 1, 2017, from https://www.youtube.com/watch?v=7PKx3kS7f4A

