Purpose
The developing academic field of machine ethics seeks to make artificial agents safer as they become more pervasive throughout society. In contrast to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. In this study, an action-based ethical theory that combines deontological and teleological theories of ethics is used to construct an artificial moral agent (AMA).

Design/methodology/approach
The decisions reached by the AMA are derived via fuzzy-logic interpretation of the relative values of the steady-state simulations of the corresponding rule-based fuzzy cognitive map (RBFCM).

Findings
Through the use of RBFCMs, this paper illustrates the possibility of incorporating ethical components into machines, where latent semantic analysis (LSA) and RBFCMs can be used to model dynamic and complex situations and to provide the ability to acquire causal knowledge.

Research limitations/implications
This approach is especially appropriate for the data-poor and uncertain situations common in ethics. Nonetheless, for a machine with an ethical component to function autonomously in the world, AI research will need to further investigate the representation and determination of ethical principles, the incorporation of those principles into a system's decision procedure, ethical decision-making with incomplete and uncertain knowledge, the explanation of decisions made using ethical principles, and the evaluation of systems that act on the basis of ethical principles.

Practical implications
To date, the research conducted has contributed to a theoretical foundation for machine ethics by exploring the rationale for, and the feasibility of, adding an ethical dimension to machines.
Further, the constructed artificial moral agent illustrates the possibility of applying an action-based ethical theory that provides guidance in ethical decision-making according to the precepts of its respective duties. The use of LSA illustrates its powerful capabilities in understanding text and its potential application as an information-retrieval component of artificial moral agents. The use of cognitive maps provides an approach and a decision procedure for resolving conflicts between different duties.

Originality/value
The paper suggests that cognitive maps can be used in artificial moral agents as tools for meta-analysis, in which multiple ethical principles and duties can be compared and weighed against one another. With cognitive mapping, complex and abstract variables that cannot easily be measured but are important to decision-making can be modeled.
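To make the methodology concrete, the sketch below shows how a steady-state simulation of a fuzzy cognitive map can be run and its relative node activations read off. This is a minimal, illustrative classical FCM with numeric causal weights, not the paper's actual RBFCM formulation (rule-based FCMs replace numeric edge weights with fuzzy rule bases); the three concept names, the weight matrix, and the initial activations are all hypothetical.

```python
import numpy as np

def fcm_steady_state(W, a0, tol=1e-6, max_iter=200):
    """Iterate a fuzzy cognitive map until its activations stop changing.

    W[i, j] is the causal weight of concept i on concept j (in [-1, 1]);
    a0 is the initial activation vector. A sigmoid squashes each update
    into (0, 1). Iterates until the change falls below tol, or until
    max_iter updates have been applied.
    """
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        nxt = 1.0 / (1.0 + np.exp(-(a + a @ W)))  # sigmoid of self-activation plus causal input
        if np.max(np.abs(nxt - a)) < tol:
            return nxt
        a = nxt
    return a

# Hypothetical three-concept map: two competing duties and a candidate action.
W = np.array([
    [0.0, -0.4, 0.7],   # duty_A weakly inhibits duty_B, supports the action
    [-0.4, 0.0, -0.6],  # duty_B weakly inhibits duty_A, opposes the action
    [0.0,  0.0,  0.0],  # the action node has no outgoing influence
])
a0 = [0.8, 0.3, 0.0]    # duty_A strongly triggered, duty_B weakly triggered
steady = fcm_steady_state(W, a0)
```

A decision procedure in the spirit of the abstract would then interpret the relative steady-state values fuzzily (e.g., "action strongly supported" versus "duties in conflict") rather than reading the raw numbers as precise quantities.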
Journal of Information, Communication and Ethics in Society – Emerald Publishing
Published: Aug 8, 2016