Crowdsourcing is an increasingly popular method for researchers in the social and behavioral sciences, including experimental philosophy, to recruit survey respondents. Crowdsourcing platforms, such as Amazon’s Mechanical Turk (MTurk), have been seen as a way to produce high-quality survey data both quickly and cheaply. However, in the last few years, a number of authors have claimed that the low pay rates on MTurk are morally unacceptable. In this paper, I explore some of the methodological implications for online experimental philosophy research if, in fact, typical pay practices on MTurk are morally impermissible. I argue that the most straightforward solution to this apparent moral problem—paying survey respondents more and relying only on “high reputation” respondents—will likely increase the number of subjects who have previous experience with survey materials and thus are “non-naïve” with respect to those materials. I then discuss some likely effects that this increase in experimental non-naïveté will have on some aspects of the “negative” program in experimental philosophy, focusing in particular on recent debates about philosophical expertise.
Review of Philosophy and Psychology – Springer Journals
Published: Dec 8, 2017