When it comes to the question of what kind of moral claim an intelligent or autonomous machine might have, one way to answer is by comparison with humans: Is there a fundamental difference between humans and other entities? If so, on what basis, and what are the implications for science and ethics? This question is inherently imprecise, however, because it presupposes that we can readily determine what it means for two types of entities to be sufficiently different—what I will refer to as being “discontinuous”. In this paper, I sketch a formal characterization of what it means for types of entities to be discontinuous with regard to each other. This expands upon Bruce Mazlish’s initial formulation of what he terms a continuity between humans and machines, Alan Turing’s epistemological approach to the question of machine intelligence, and Sigmund Freud’s notion of scientific revolutions dealing blows to the self-esteem of mankind. I discuss on what basis we should regard entities as (dis-)continuous, the corresponding moral and scientific implications, and an important distinction between what I term downgrading and upgrading continuities—two dramatically different ways in which previously discontinuous types of entities might become continuous. All of this is phrased in terms of which scientific levels of explanation we need to presuppose, in principle or in practice, when we seek to explain a given type of entity. The ultimate purpose is to provide a framework that defines which questions we need to ask if we argue that two types of entities ought (not) to be explained (hence treated) in the same manner, as well as what it takes to reconsider scientific and ethical hierarchies imposed on the natural and artificial world.
Philosophy & Technology – Springer Journals
Published: Sep 20, 2013