The review by Olsson and colleagues on chromatic and achromatic models is a very useful read for the many behavioral ecologists and neuroethologists unsure how to apply these models, for those who want to improve what they are already doing, and for those deciding whether or not to attempt it at all. That said, it is not a guide on how to do it but, as the title states, a guide to the pitfalls and limitations of the currently favored model, the Vorobyev/Osorio receptor-noise-limited model (V/O RNL) (Vorobyev and Osorio 1998). Perhaps most importantly, Olsson et al. (2018) repeatedly note that some sort of behavioral calibration or verification is, if not essential, at least very desirable.

This paper is by no means an easy read and will be of most use to those who have already tried the V/O RNL model. I personally hope it will be useful to those who have tried, reached the wrong conclusion, and nonetheless had the results published, because there are many such studies out there. One caveat, in fact not mentioned until the end of the review, is that the model is not suited to examining large just-noticeable differences (jnds) but operates best around threshold, at jnds of roughly 1–3. This, along with other considerations covered in the review, is often ignored, and it has become difficult to decide where the right conclusion has been drawn for the wrong reason, or simply the wrong conclusion. This cautionary missive will help and should probably be read alongside the existing papers it puts under the microscope. It will also be of great benefit to editors and reviewers.

As a good review should, this work contains a wealth of secondary references in this area and a very valuable table of chromatic and achromatic thresholds. Two recent publications missing are the special edition of Philosophical Transactions of the Royal Society B (Caro et al. 2017), to which these authors also contribute, and a thorough discussion by Price (2017).

Based on this collective knowledge, as Olsson et al. point out, anyone can have a go at these models, even with missing data: sensible estimates can be made, as long as the results are treated as guidelines and ideas only. Go for it! The danger comes when a desired result from modeling is fed into a secondary conclusion without sufficient caution and attention to the caveats mentioned in this review. Perhaps the most important point in this respect is not to be distracted by pretty colors alone but to include luminance contrast, using a best estimate of the photoreceptors responsible for it. Also, do not expect to know exactly how luminance information interacts with color (or indeed polarization); this is a complex area needing more neurophysiology and behavioral dissection than most want to enter into.

To paraphrase this valuable work in a paragraph: do your best to measure or find the parameters needed, estimate sensibly where necessary, do not become too wedded to or proud of your results, and try to both back up and feed numbers into your modeling with some behavioral observation. All of us could do with a good dose of Konrad Lorenzian perspective. Go look at your animal in the real world, not a deconstruction of it on a little screen. Recognize that what it is telling you is defined by how you are observing it (stimulus size, context, illumination level, etc.), not what it is actually capable of throughout life.

REFERENCES

Caro T, Stoddard MC, Stuart-Fox D. 2017. Animal coloration: production, perception, function and application. Phil Trans R Soc Lond B. 372:20170047.

Olsson P, Lind O, Kelber A. 2018. Chromatic and achromatic vision: parameter choice and limitations for reliable model predictions. Behav Ecol. 29:273–282.

Price TD. 2017. Sensory drive, color, and color vision. Am Nat. 190:157–170.

Vorobyev M, Osorio D. 1998. Receptor noise as a determinant of colour thresholds. Proc Biol Sci. 265:351–358.

© The Author(s) 2017. Published by Oxford University Press on behalf of the International Society for Behavioral Ecology. All rights reserved.
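For readers wondering what "a jnd of 1–3" means in practice, the following is a minimal sketch of the V/O RNL chromatic distance for a dichromat (the simplest case of Vorobyev and Osorio 1998). The quantum catches and Weber fractions below are hypothetical placeholders for illustration, not measured values for any real visual system.

```python
import math

def rnl_delta_s(q_a, q_b, weber):
    """Chromatic distance (in jnd units) between stimuli A and B for a
    dichromat, following the receptor-noise-limited model of Vorobyev
    and Osorio (1998).

    q_a, q_b: quantum catches (one per receptor class) for each stimulus.
    weber: Weber fractions (noise) for the two receptor classes.
    """
    # Receptor signals: log-transformed quantum-catch ratios
    df1 = math.log(q_a[0] / q_b[0])
    df2 = math.log(q_a[1] / q_b[1])
    e1, e2 = weber
    # Opponent difference scaled by combined receptor noise
    return abs(df1 - df2) / math.sqrt(e1 ** 2 + e2 ** 2)

# Hypothetical values: two similar stimuli, 5% Weber fraction in both channels
delta_s = rnl_delta_s(q_a=(0.82, 0.30), q_b=(0.78, 0.33), weber=(0.05, 0.05))
print(round(delta_s, 2))  # ~2 jnd: near threshold, where the model is most reliable
```

Distances well above ~3 jnd fall outside the regime the model was designed for, which is precisely the caveat the review flags.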
Behavioral Ecology – Oxford University Press
Published: Mar 1, 2018