Purpose
The developing academic field of machine ethics seeks to make artificial agents safer as they become more pervasive throughout society. In contrast to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. In this study, an action-based ethical theory that combines deontological and teleological theories of ethics is used to construct an artificial moral agent (AMA).

Design/methodology/approach
The decision results derived by the AMA are acquired via fuzzy logic interpretation of the relative values of the steady-state simulations of the corresponding rule-based fuzzy cognitive map (RBFCM).

Findings
Through the use of RBFCMs, this paper illustrates the possibility of incorporating ethical components into machines, where latent semantic analysis (LSA) and RBFCMs can be used to model dynamic and complex situations and to provide abilities in acquiring causal knowledge.

Research limitations/implications
This approach is especially appropriate for the data-poor and uncertain situations common in ethics. Nonetheless, to ensure that a machine with an ethical component can function autonomously in the world, research in AI will need to further investigate the representation and determination of ethical principles, the incorporation of these principles into a system's decision procedure, ethical decision-making with incomplete and uncertain knowledge, the explanation of decisions made using ethical principles, and the evaluation of systems that act based upon ethical principles.

Practical implications
To date, the conducted research has contributed a theoretical foundation for machine ethics through exploration of the rationale for, and the feasibility of, adding an ethical dimension to machines.
Further, the constructed artificial moral agent illustrates the possibility of utilizing an action-based ethical theory that provides guidance in ethical decision-making according to the precepts of its respective duties. The use of LSA illustrates its powerful capabilities in understanding text and its potential application to information retrieval within artificial moral agents. The use of cognitive maps provides an approach and a decision procedure for resolving conflicts between different duties.

Originality/value
The paper suggests that cognitive maps could be used in artificial moral agents as tools for meta-analysis, in which comparisons regarding multiple ethical principles and duties can be examined and considered. With cognitive mapping, complex and abstract variables that cannot easily be measured but are important to decision-making can be modeled. This approach is especially appropriate for the data-poor and uncertain situations common in ethics.
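The abstract does not detail the paper's RBFCM construction, but the steady-state simulation it mentions can be sketched in its simplest form: concept activations (e.g., the strengths of competing duties) are repeatedly propagated through a causal weight matrix and squashed into [0, 1] until they stop changing. The code below is a minimal illustration of a plain fuzzy cognitive map under that common update rule, not the paper's rule-based variant; the function name, the two-concept example, and the weight values are all hypothetical.

```python
import numpy as np

def fcm_steady_state(W, a0, max_iters=100, tol=1e-6):
    """Iterate a fuzzy cognitive map until concept activations converge.

    W  -- causal weight matrix; W[i][j] is the influence of concept i on j,
          in [-1, 1] (negative values model inhibiting causal links)
    a0 -- initial activation vector, values in [0, 1]
    """
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iters):
        # Propagate activations along causal links, then squash with a
        # sigmoid so every activation stays in (0, 1).
        a_next = 1.0 / (1.0 + np.exp(-(a @ W)))
        if np.max(np.abs(a_next - a)) < tol:
            return a_next  # converged: a fixed point of the map
        a = a_next
    return a

# Hypothetical two-concept example: duty A promotes duty B (+0.7),
# while duty B weakly inhibits duty A (-0.4).
W = np.array([[0.0, 0.7],
              [-0.4, 0.0]])
state = fcm_steady_state(W, [0.5, 0.5])
```

In the approach the abstract describes, the relative values of such a converged state vector would then be interpreted through fuzzy logic to yield the agent's decision; with small weights like these, the sigmoid update is a contraction, so the iteration settles quickly to a unique fixed point.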
Journal of Information, Communication and Ethics in Society – Emerald Publishing
Published: Aug 8, 2016