AI & SOCIETY
Should I kill or rather not?
Luis Moniz Pereira
Received: 15 May 2018 / Accepted: 26 May 2018
© Springer-Verlag London Ltd., part of Springer Nature 2018
Robots are already among us: They build our cars and
vacuum our apartments. Why do these machines need a
sense of right and wrong?
Their tasks will change: they will work much more closely with humans, and they will have more autonomy, for example as caretakers for the elderly. Imagine a robot in a nursing home. It is helping elderly people with eating and grooming, and it is handing out medicine. One morning a resident asks the robot for painkillers, because he has a terrible headache. The robot is allowed to hand out pills only with the approval of a doctor. But none of the doctors are available. Will the robot let the resident suffer, or will it make an exception? Its decision depends on the way we program it.
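As a minimal sketch (not from the interview; the scenario encoding, function names, and conditions are illustrative assumptions), the difference between a rigid rule and one that admits a narrow, justifiable exception can be made concrete:

```python
# Hypothetical sketch: two ways to program the nursing-home robot's
# painkiller decision. All names and conditions are illustrative
# assumptions, not an implementation described in the interview.

def rigid_policy(doctor_available: bool, approved: bool) -> bool:
    """Hard rule: dispense only with a doctor's approval."""
    return doctor_available and approved

def exception_policy(doctor_available: bool, approved: bool,
                     pain_severe: bool, dose_is_safe: bool) -> bool:
    """Same rule, plus a narrow exception: if no doctor can be
    reached, severe pain and a known-safe dose permit dispensing."""
    if doctor_available:
        return approved
    return pain_severe and dose_is_safe

# The resident with a terrible headache, no doctor on call:
print(rigid_policy(doctor_available=False, approved=False))
# False: the resident suffers
print(exception_policy(doctor_available=False, approved=False,
                       pain_severe=True, dose_is_safe=True))
# True: the exception is made
```

The point is that the exception itself must be programmed in advance: the machine only ever does what some rule, however nuanced, already permits.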
The thought that a robot will take such decisions for us
is a little eerie.
I think the problem is not that machines will take over, but that we are giving too much power to simplistic machines: machines that take decisions based on statistics. They can neither consider the individual circumstances of each case, nor justify their actions.
You say that to teach a robot morality, we need to know
what we as humans consider right or wrong. How much
do we know about our own moral principles?
Neither computer scientists nor sociologists know enough
about human morality. Nobody does. One thing seems cer-
tain: Morality evolved. We are a gregarious species, so we
need rules for living together. We are born with the ability to
learn moral behaviour, much as we are born with the ability to learn a language. Ninety-five per cent of all moral decisions are taken by reflex. It is only in complex situations that we need to think things through or even suppress our first impulse.
So we are deciding intuitively, without knowing why?
At least, most people have difficulty explaining why they decided one way or the other. And that's a problem: We don't know enough about the basics of morality to program it. Even ethicists disagree on how to act in certain moral dilemmas, and on what constitutes good moral reasoning. There are different schools of thought.
Would not a machine be perfectly suited to calculate which decision yields the greatest benefit for all involved?
But how would you do it? And what information would be needed?
Imagine a situation where you need to decide who lives and
who dies. Is it better to save a doctor, who in turn might go
on to save many more lives? Or do you save a young person
who has his whole life ahead of him? There is no simple,
universal morality that everyone can agree on.
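A toy utilitarian calculation (purely illustrative; every number and weight below is an assumption, not anything from the interview) shows why such a computation settles nothing, since the answer flips with the weights one chooses:

```python
# Hypothetical utilitarian toy: who to save depends entirely on
# the assumed utilities, which is exactly the contested part.

candidates = {
    "doctor":       {"own_years": 30, "lives_saved_later": 5},
    "young_person": {"own_years": 60, "lives_saved_later": 0},
}

def utility(person: str, years_per_saved_life: int) -> int:
    """Sum of the person's own remaining years plus the years
    attributed to lives they might save in the future."""
    c = candidates[person]
    return c["own_years"] + c["lives_saved_later"] * years_per_saved_life

for weight in (5, 40):  # two defensible-sounding weights
    best = max(candidates, key=lambda p: utility(p, weight))
    print(f"weight={weight}: save the {best}")
# weight=5  -> save the young_person  (55 vs. 60)
# weight=40 -> save the doctor       (230 vs. 60)
```

The arithmetic is trivial; choosing the weights is the moral question, and it is precisely the part people disagree on.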
It sounds like an almost hopeless mission to program a machine with fixed moral rules.
We are still at the very beginning. We should start with clearly defined norms for specific settings: for hospitals, for …
This is a joint synopsis, in English, of three interviews on the subject of "Machine Ethics" given by the author in 2018. Nora Saager, a science journalist for the German "P. M. Magazine", conducted one of them; the original, in German, came out in its February 2018 issue. Another was conducted by journalist Pedro Lucas for the feature "Um Café Com…" of "Men's Health" magazine; the original, in Portuguese, was published in its January 2018 issue. Journalist Virgílio Azevedo, for his regular feature "O Futuro do Futuro" in the weekly "Expresso", conducted the third; the original, in Portuguese, was published on 28 April 2018. I thank the three journalists for permission to utilize my fusion and English translation of selected parts of the above-mentioned material.
* Luis Moniz Pereira
Professor Emeritus, Departamento de Informática, Universidade Nova de Lisboa, Caparica, Portugal
NOVA-LINCS research centre, Departamento de Informática, Universidade Nova de Lisboa, Caparica, Portugal