Fast, Cheap, and Unethical? The Interplay of Morality and Methodology in Crowdsourced Survey Research


Review of Philosophy and Psychology , Volume 9 (2) – Dec 8, 2017


References (61)

Publisher
Springer Journals
Copyright
Copyright © 2017 by Springer Science+Business Media B.V., part of Springer Nature
Subject
Philosophy; Philosophy of Mind; Cognitive Psychology; Neurosciences; Epistemology; Developmental Psychology; Philosophy of Science
ISSN
1878-5158
eISSN
1878-5166
DOI
10.1007/s13164-017-0374-z

Abstract

Crowdsourcing is an increasingly popular method for researchers in the social and behavioral sciences, including experimental philosophy, to recruit survey respondents. Crowdsourcing platforms, such as Amazon’s Mechanical Turk (MTurk), have been seen as a way to produce high-quality survey data both quickly and cheaply. However, in the last few years, a number of authors have claimed that the low pay rates on MTurk are morally unacceptable. In this paper, I explore some of the methodological implications for online experimental philosophy research if, in fact, typical pay practices on MTurk are morally impermissible. I argue that the most straightforward solution to this apparent moral problem—paying survey respondents more and relying only on “high reputation” respondents—will likely increase the number of subjects who have previous experience with survey materials and thus are “non-naïve” with respect to those materials. I then discuss some likely effects that this increase in experimental non-naïveté will have on some aspects of the “negative” program in experimental philosophy, focusing in particular on recent debates about philosophical expertise.

Journal

Review of Philosophy and Psychology, Springer Journals

Published: Dec 8, 2017
