
Machine Ethics: developing a fully autonomous artificial moral agent

Jeffrey White
October, 2017

Summary:

Machine ethics involves, first, understanding human morality in a way that may in
principle be engineered into an artificial agent, such that machines may be
evaluated in traditional human terms, and, second, adapting
such a schema in the design and construction of artificial moral agents given
adequate technology. It is not to be confused with robot ethics, which concerns
the effects of semi-autonomous and robotic agents on human beings and their
society, for example worker displacement due to robotic automation of the
workplace and the broader economic consequences thereof, or safety and
liability issues related to self-driving automobiles. The distinction between the two,
robot and machine ethics, can be drawn roughly along the lines of autonomy,
with machine ethics focused on developing genuinely autonomous agents and
robot ethics focused on agents that are much more limited.

Traditionally, machine autonomy and moral agency has been approached from
the outside-in, with researchers focused on how to program digital computers
with rules and principles derived from human experience and rendered in
purely symbolic terms in some sort of logical framework. The fragility of such
systems is well known, and it is the subject of popular adaptations, for example
Asimov's famous four laws of robotics. However, this has not stopped
researchers from pursuing exactly this tack. More than fifty years ago, Hubert
Dreyfus famously analyzed the problem, as researchers tried to apply methods
successful in relatively simple, formal contexts to increasingly complex, informal
contexts, only to be met with disappointment. And he was able to bring this
analysis to bear on the generations of AI developed since: good old-fashioned
artificial intelligence, expert systems informed by millions of individual explicit
facts, and even relatively recent efforts in dynamical-systems-inspired neural
network models. All have aspired to what is now discussed under the heading of
artificial general intelligence, and all have failed, and will fail, for the same
reasons: all lack authentic subjective grounds for moral agency, and none are
genuinely autonomous.

This brings us to what I feel is a fourth distinct generation of AI and with it an era
ripe for an inside-out rather than an outside-in approach to morality in an
artificial agent. The bulk of this talk concerns this approach and with it an
appreciation of the research platform that facilitates its pursuit. First, we will
review the inherited (Western) view on moral agency as articulated by Aristotle
more than two thousand years ago and then as transformed by Kant for an
increasingly liberal Christian Europe more than two hundred years ago. These
views deeply influenced the framers of the US Constitution, for example, and
continue to fundamentally shape ethical and moral discourse, so they remain
important in understanding artificial agents in terms equivalent to human
beings today. At the root of this view is a general model of agency within the
constraints of a natural world with others situated in the same terms. We will
isolate this basic model of agency, and explain how Kant's famous categorical
imperative emerges through its normal exercise, rather than being programmed
into a machine externally, as a primitive principle without authentic
subjective grounds. Finally, we will specify what is required of an artificial agent
so that it might embody such a moral capacity, and speculate briefly on what it
might mean for us to live amongst fully autonomous artificial agents when we
finally do develop an essentially moral machine.
