Models for a colorful reality?: a response to comments on Olsson et al.

In her commentary on our review (Olsson et al. 2018), Stuart-Fox (2018) asks “What do animals see” and continues that we cannot imagine it, but can attempt to model it. Price and Fialko (2018) begin their commentary by saying: “Behavioral experiments are crucial to understanding animal vision.” We agree and add: if behavioral experiments are carefully designed, they can answer even complex questions about the visual world of animals. However, behavioral experiments, even on a small number of specimens, can take weeks or months, and thus can only be performed on a limited number of species, asking a limited range of questions. Tests are also performed in laboratory settings and thus not under the most ecologically realistic conditions. Finally, tests often separate single sensory modalities, or even submodalities, testing either spatial resolution, color discrimination, or an animal's sensitivity to achromatic contrasts. All of this is necessary to keep a test simple and thus allow for unambiguous conclusions. Even in carefully controlled experiments, results vary between individuals and with behavioral context, motivation, and other factors. Thus, behavioral tests are painstaking and time-consuming and cannot answer all the questions we would like to pose. Here, well-informed models can help. It is the beauty of the receptor noise limited (RNL) model of color discrimination discussed in the review and commentaries that it can predict absolute discrimination thresholds using a very limited number of parameters based on receptor physiology. As Osorio and Vorobyev (2018), who conceived this ingenious model 20 years ago, point out in their commentary, it “provides an excellent basis for comparative and evolutionary studies.” The model can be used to better understand behavioral results and to make predictions that can be tested in further behavioral experiments.
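To make the model's parameter economy concrete, here is a minimal sketch of the RNL prediction, assuming the standard trichromatic formula of Vorobyev and Osorio (1998) and the log-ratio achromatic version; the function names and the example Weber fractions are ours and purely illustrative, not values recommended for any particular species.

```python
import math

def chromatic_delta_s(qa, qb, e):
    """Trichromatic RNL colour distance (in just-noticeable differences, JNDs)
    between stimuli A and B.
    qa, qb: quantum catches (q1, q2, q3) of the three receptor types;
    e:      Weber fractions (e1, e2, e3) of the corresponding channels,
            i.e. the receptor-noise parameters the model depends on."""
    # Receptor signals are log quantum-catch ratios (Weber-Fechner coding).
    df = [math.log(a / b) for a, b in zip(qa, qb)]
    e1, e2, e3 = e
    num = ((e1 * (df[2] - df[1])) ** 2
           + (e2 * (df[2] - df[0])) ** 2
           + (e3 * (df[1] - df[0])) ** 2)
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return math.sqrt(num / den)

def achromatic_delta_s(qa, qb, e):
    """Achromatic RNL contrast: |log quantum-catch ratio| divided by the
    Weber fraction of the achromatic channel."""
    return abs(math.log(qa / qb)) / e

# Identical stimuli are zero JNDs apart, by construction.
print(chromatic_delta_s((1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (0.05, 0.05, 0.1)))  # → 0.0
```

A stimulus pair is conventionally taken as discriminable when the predicted distance exceeds 1 JND; the point made in the review is that the Weber fractions `e` fed into such a calculation must be validated behaviorally or physiologically for the species and channel in question, not copied between species or between the chromatic and achromatic cases.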
If behavioral tests indeed measure absolute discrimination thresholds, we would argue, contrary to Vasas et al. (2018), that model calculations do allow the estimation of the underlying receptor noise (Olsson et al. 2015, 2018). Yet, we fully agree with Vasas et al. (2018) that electrophysiological measurements are crucial. Unfortunately, in times when the number of papers and citations is used to measure the quality of science, it is tempting to replace experiments with model calculations, instead of combining behavioral tests, physiology, and models. Sadly, as Marshall (2018) points out, this can very easily lead to wrong conclusions, many of which have been published, and this does not bring the field forward. Instead of developing alternative models, as suggested in the commentary by Price and Fialko (2018), we suggest a generally careful use of any model, and more effort invested in behavioral and physiological experiments. Interestingly, all five commentaries mostly discuss the original, chromatic version of the RNL model. Our review, however, was initially inspired by our observation that the more recent adaptation of the RNL model for achromatic discrimination thresholds is often used without parameter validation. The table of physiologically or behaviorally determined thresholds (Olsson et al. 2018) can be used to directly predict achromatic discrimination by animals; our main point is to explain how these thresholds can be used to inform the RNL model. The commentary by Vasas et al. (2018) nicely makes the important point that noise levels differ between chromatic and achromatic channels in some species, but not in others. We hope that the review and commentaries contribute to a more cautious use of model predictions in general, and inspire more researchers to perform controlled behavioral or electrophysiological experiments on their animals!

REFERENCES

Marshall J. 2018. Do not be distracted by pretty colors: a comment on Olsson et al. Behav Ecol. 29:286–287.

Olsson P, Lind O, Kelber A. 2015. Bird colour vision: behavioural thresholds reveal receptor noise. J Exp Biol. 218:184–193.

Olsson P, Lind O, Kelber A. 2018. Chromatic and achromatic vision: parameter choice and limitations for reliable model predictions. Behav Ecol. 29:273–282.

Osorio D, Vorobyev M. 2018. Principles and application of the receptor noise model of colour discrimination: a comment on Olsson, Lind and Kelber 2017. Behav Ecol. 29:283.

Price T, Fialko K. 2018. Receptor noise models: time to consider alternatives?: a comment on Olsson et al. Behav Ecol. 29:284–285.

Stuart-Fox D. 2018. Opening the ‘black box’ of modelling animal colour vision: a comment on Olsson, Lind and Kelber. Behav Ecol. 29:284.

Vasas V, Brebner JS, Chittka L. 2018. Colour discrimination is not just limited by photoreceptor noise: a comment on Olsson et al. Behav Ecol. 29:285–286.

Publisher: Oxford University Press
Copyright: © The Author(s) 2017. Published by Oxford University Press on behalf of the International Society for Behavioral Ecology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
ISSN: 1045-2249
eISSN: 1465-7279
DOI: 10.1093/beheco/arx186


Journal: Behavioral Ecology (Oxford University Press)

Published: Mar 1, 2018

