Do not be distracted by pretty colors: a comment on Olsson et al.

The review by Olsson and colleagues on chromatic and achromatic models is a very useful read for the many behavioral ecologists and neuroethologists confused about how to do this, for those who may want to improve what they are already doing, and for those at the point of deciding whether or not to do it. That said, it is not a guide on how to do it but more, as the title states, a guide to the pitfalls and “limitations” of the currently favored model, the Vorobyev/Osorio receptor-noise-limited model (V/O RNL) (Vorobyev and Osorio 1998). Perhaps most importantly, Olsson et al. (2018) repeatedly note that some sort of behavioral calibration or verification is, if not essential, at least very desirable.

This paper is by no means an easy read and will certainly be of most use to those who have already had a go at using the V/O RNL model. I personally hope it will be very useful to those who have had a go and reached the wrong conclusion, because there are many out there who have and have nonetheless had the results published. One of the caveats, in fact not mentioned until the end of the review, is that this model is not suited to examining large just-noticeable differences (jnds) but operates best near threshold, at around 1-3 jnds. This, along with other considerations also covered in the review, is often ignored, and it has become difficult to decide where the right conclusion for the wrong reason, or just the wrong conclusion, has been drawn. This cautionary missive will help and should probably be read alongside existing papers under the microscope. It will also be of great benefit to editors and reviewers.

As a good review should, this work contains a wealth of secondary references in this area and a very valuable table of chromatic and achromatic thresholds. A couple of recent publications missing are the recent special edition of Philosophical Transactions of the Royal Society B (Caro et al. 2017), to which these authors also contribute, and a thorough discussion by Price (2017).

Based on this collective knowledge, as Olsson et al. point out, anyone can have a go at these models, even if data are missing. Sensible estimates can be made, as long as the results are treated as guidelines and ideas only. Go for it! The danger comes when one desired result from modeling is fed into a secondary conclusion without sufficient caution and attention to the caveats mentioned in this review. Perhaps the most important point in this respect is not to get distracted by pretty colors alone but to include luminance contrast, using a best estimate of the photoreceptors responsible for it. Also, do not expect to know exactly how luminance information interacts with color (or indeed polarization); this is a complex area needing more neurophysiology and behavioral dissection than most want to enter into.

To paraphrase this valuable work in a paragraph: do your best to measure or find the parameters needed; estimate sensibly where necessary; do not become too wedded to or proud of your results; and try both to back up your modeling with, and feed numbers into it from, behavioral observation. All of us could do with a good dose of Konrad Lorenzian perspective. Go look at your animal in the real world, not a deconstruction of it on a little screen. Recognize that what it is telling you is defined by how you are observing it (stimulus size, context, illumination level, etc.), not what it is actually capable of throughout life.

REFERENCES

Caro T, Stoddard MC, Stuart-Fox D. 2017. Animal coloration: production, perception, function and application. Phil Trans R Soc Lond B. 372:20170047.

Olsson P, Lind O, Kelber A. 2018. Chromatic and achromatic vision: parameter choice and limitations for reliable model predictions. Behav Ecol. 29:273-282.

Price TD. 2017. Sensory drive, color, and color vision. Am Nat. 190:157-170.

Vorobyev M, Osorio D. 1998. Receptor noise as a determinant of colour thresholds. Proc Biol Sci. 265:351-358.

© The Author(s) 2017. Published by Oxford University Press on behalf of the International Society for Behavioral Ecology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
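For readers who have not yet had a go at the V/O RNL model, its core chromatic-distance calculation for a trichromat can be sketched in a few lines. This is a minimal illustration of the published formula (Vorobyev and Osorio 1998), not code from the commentary or the review; the function and variable names are my own, and real applications must still supply sensible quantum catches and Weber fractions, which is exactly the parameter-choice problem the review addresses.

```python
import math

def rnl_delta_s(q_a, q_b, weber):
    """Receptor-noise-limited chromatic distance for a trichromat,
    after Vorobyev & Osorio (1998).

    q_a, q_b : quantum catches of the three receptor classes for
               stimuli A and B (e.g. target vs. background).
    weber    : Weber fractions (noise) e1, e2, e3 of the channels.
    Returns Delta S in jnd units; values of roughly 1-3 are the
    near-threshold range where the model behaves best.
    """
    # Log-transformed receptor contrasts between the two stimuli
    df = [math.log(a / b) for a, b in zip(q_a, q_b)]
    e1, e2, e3 = weber
    num = (e1 * (df[2] - df[1])) ** 2 \
        + (e2 * (df[2] - df[0])) ** 2 \
        + (e3 * (df[1] - df[0])) ** 2
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return math.sqrt(num / den)
```

As a sanity check, identical stimuli give a distance of zero, and a 20% quantum-catch difference in a single channel with Weber fractions of 0.05 comes out at roughly 3 jnds, i.e. just at the upper edge of the range where the review suggests the model is reliable.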

Publisher
Oxford University Press
ISSN
1045-2249
eISSN
1465-7279
DOI
10.1093/beheco/arx164


Journal

Behavioral Ecology, Oxford University Press

Published: Mar 1, 2018

