
Artificial Moral Agents Within an Ethos of AI4SG


Philosophy & Technology, Volume OnlineFirst – Apr 28, 2020

Publisher: Springer Journals
Copyright: © Springer Nature B.V. 2020
ISSN: 2210-5433
eISSN: 2210-5441
DOI: 10.1007/s13347-020-00400-z

Abstract

As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated into society in a manner that would maximise the good for as many people as possible, whilst minimising the bad. One of the most urgent societal expectations of artificial agents is the need for them to behave in a manner that is morally relevant, i.e. to become artificial moral agents (AMAs). In this article, I will argue that exemplarism, an ethical theory based on virtue ethics, can be employed in the building of computationally rational AMAs with weak machine ethics. I further argue that three features of exemplarism, namely grounding in moral exemplars, meeting community expectations and practical simplicity, are crucial to its uniqueness and suitability for application in building AMAs that fit the ethos of AI4SG.

Journal

Philosophy & Technology, Springer Journals

Published: Apr 28, 2020
