Do we feel colours? A systematic review of 128 years of psychological research linking colours and emotions
Jonauskaite, Domicele; Mohr, Christine
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02615-z
pmid: 39806242
Colour is an integral part of natural and constructed environments. For many, it also has an aesthetic appeal, with some colours being more pleasant than others. Moreover, humans seem to systematically and reliably associate colours with emotions, such as yellow with joy, black with sadness, light colours with positive and dark colours with negative emotions. To systematise such colour–emotion correspondences, we identified 132 relevant peer-reviewed articles published in English between 1895 and 2022. These articles covered a total of 42,266 participants from 64 different countries. We found that all basic colour categories had systematic correspondences with affective dimensions (valence, arousal, power) as well as with discrete affective terms (e.g., love, happy, sad, bored). Most correspondences were many-to-many, with systematic effects driven by lightness, saturation, and hue (‘colour temperature’). More specifically, (i) LIGHT and DARK colours were associated with positive and negative emotions, respectively; (ii) RED with empowering, high arousal positive and negative emotions; (iii) YELLOW and ORANGE with positive, high arousal emotions; (iv) BLUE, GREEN, GREEN–BLUE, and WHITE with positive, low arousal emotions; (v) PINK with positive emotions; (vi) PURPLE with empowering emotions; (vii) GREY with negative, low arousal emotions; and (viii) BLACK with negative, high arousal emotions. Shared communication needs might explain these consistencies across studies, making colour an excellent medium for communication of emotion. As most colour–emotion correspondences were tested on an abstract level (i.e., associations), it remains to be seen whether such correspondences translate to the impact of colour on experienced emotions and specific contexts.
The cost of perspective switching: Constraints on simultaneous activation
Segal, Dorit
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02633-x
pmid: 39806243
Visual perspective taking often involves transitioning between perspectives, yet the cognitive mechanisms underlying this process remain unclear. The current study draws on insights from task- and language-switching research to address this gap. In Experiment 1, 79 participants judged the perspective of an avatar positioned in various locations, observing either the rectangular or the square side of a rectangular cuboid hanging from the ceiling. The avatar's perspective was either consistent or inconsistent with the participant's, and its computation sometimes required mental transformation. The task included both single-position blocks, in which the avatar's location remained fixed across all trials, and mixed-position blocks, in which the avatar's position changed across trials. Performance was compared across trial types and positions. In Experiment 2, 126 participants completed a similar task administered online, with more trials, and performance was compared at various points within the response time distribution (vincentile analysis). Results revealed a robust switching cost. However, mixing costs, which reflect the ability to keep multiple task sets active in working memory, were absent, even in the slower portions of the response time distribution. Additionally, responses to the avatar's position varied as a function of consistency with the participants' viewpoint and the angular disparity between them. These findings suggest that perspective switching is costly, that people cannot activate multiple perspectives simultaneously, and that the computation of other people's visual perspectives varies with cognitive demands.
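A minimal sketch of the vincentile analysis mentioned above, under the common implementation in which each participant's sorted response times are split into equal-size bins whose means are then averaged across participants. All data below is synthetic, and the bin count of 5 is an arbitrary choice, not the study's:

```python
import numpy as np

def vincentiles(rts, n_bins=5):
    """Split one participant's sorted RTs into n_bins equal-size bins
    and return the mean RT of each bin (the vincentiles)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    bins = np.array_split(rts, n_bins)
    return np.array([b.mean() for b in bins])

# Group-level vincentile averaging: compute vincentiles per participant,
# then average each bin across participants.
rng = np.random.default_rng(0)
participants = [rng.lognormal(mean=-0.5, sigma=0.3, size=120) for _ in range(20)]
group = np.mean([vincentiles(p) for p in participants], axis=0)
print(group)  # mean RT per quantile bin, fastest to slowest
```

Comparing conditions bin by bin in this way shows whether an effect (here, the absent mixing cost) emerges only in the slow tail of the RT distribution.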
Increased attention towards progress information near a goal state
Devine, Sean; Dong, Y. Doug; Silva, Martin Sellier; Roy, Mathieu; Otto, A. Ross
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02636-8
pmid: 39806241
A growing body of evidence across psychology suggests that (cognitive) effort exertion increases in proximity to a goal state. For instance, previous work has shown that participants respond more quickly, but not less accurately, when they near a goal—as indicated by a filling progress bar. Yet it remains unclear when, over the course of a cognitively demanding task, people monitor progress information: Do they continuously monitor their goal progress over the course of a task, or attend more frequently to it as they near their goal? To answer this question, we used eye-tracking to examine trial-by-trial changes in progress monitoring as participants completed blocks of an attentionally demanding oddball task. Replicating past work, we found that participants increased cognitive effort exertion near a goal, as evinced by an increase in correct responses per second. More interestingly, we found that the rate at which participants attended to goal progress information—operationalized here as the frequency of gazes towards a progress bar—increased steeply near a goal state. In other words, participants extracted information from the progress bar at a higher rate when goals were proximal (versus distal). In an exploratory analysis of tonic pupil diameter, we also found that tonic pupil size increased sharply as participants approached a goal state, mirroring the gaze pattern. These results support the view that people attend to progress information more as they approach a goal.
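A toy sketch of the gaze-rate measure described above: gazes to the progress-bar region are counted per trial and related to goal proximity. The generating process, the proximity index, and all numbers are hypothetical stand-ins for illustration, not the authors' analysis pipeline:

```python
import numpy as np

# Hypothetical data for one 20-trial block: trials remaining until the goal,
# and per-trial counts of gazes to the progress bar, Poisson-generated so
# that the gaze rate rises near the goal (the pattern the paper reports).
rng = np.random.default_rng(1)
trials_remaining = np.arange(20, 0, -1)
gazes = rng.poisson(lam=1.0 + 4.0 / trials_remaining)

# A simple proximity index and its correlation with gaze frequency.
proximity = 1.0 / trials_remaining
r = np.corrcoef(proximity, gazes)[0, 1]
print(f"proximity-gaze correlation: r = {r:.2f}")
```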
Parameter identifiability in evidence-accumulation models: The effect of error rates on the diffusion decision model and the linear ballistic accumulator
Lüken, Malte; Heathcote, Andrew; Haaf, Julia M.; Matzke, Dora
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02621-1
pmid: 39777607
A variety of different evidence-accumulation models (EAMs) account for common response time and accuracy patterns in two-alternative forced-choice tasks by assuming that subjects collect and sum information from their environment until a response threshold is reached. Estimates of model parameters mapped to components of this decision process can be used to explain the causes of observed behavior. However, such explanations are only meaningful when parameters can be identified, that is, when their values can be uniquely estimated from data generated by the model. Prior studies suggest that parameter identifiability is poor when error rates are low, but have not systematically compared this issue across different EAMs. We conducted a simulation study investigating the identifiability and estimation properties of model parameters at low error rates in the two most popular EAMs: the diffusion decision model (DDM) and the linear ballistic accumulator (LBA). We found poor identifiability at low error rates for both models, though less so for the DDM and for larger numbers of trials. The DDM also showed better identifiability than the LBA at low trial numbers in a design with a manipulation of response caution. Based on our results, we recommend tasks with error rates between 15% and 35% for small trial numbers, and between 5% and 35% for large trial numbers. We explain the identifiability problem in terms of trade-offs caused by correlations between decision-threshold and accumulation-rate parameters, and discuss why the models differ in their estimation properties.
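A minimal forward simulation illustrating the threshold/accumulation-rate trade-off described above, using a basic Euler-Maruyama discretisation of the diffusion decision model. The parameter values are arbitrary choices for illustration; the two sets share the same drift x start-point product, so they yield near-identical accuracy despite different parameters, which is why accuracy alone constrains them poorly at low error rates:

```python
import numpy as np

def simulate_ddm(v, a, ter, n_trials=500, dt=0.001, s=1.0, seed=0):
    """Forward-simulate a basic diffusion decision model.
    v: drift rate, a: boundary separation (unbiased start at a/2),
    ter: non-decision time, s: diffusion noise scale."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ter)
        choices.append(x >= a)  # True = correct (upper) boundary
    return np.array(rts), np.array(choices)

# Two parameter sets trading drift against threshold: both give ~99%
# accuracy, so with few errors the data barely distinguish them.
for v, a in [(3.0, 1.6), (4.0, 1.2)]:
    rts, acc = simulate_ddm(v, a, ter=0.3)
    print(f"v={v}, a={a}: accuracy={acc.mean():.3f}, mean RT={rts.mean():.3f}s")
```

The RT distributions do differ between the two sets, but with small trial numbers and sparse errors those differences are easily swamped by sampling noise, producing the threshold-rate estimation correlations the authors describe.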
Distinct detection and discrimination sensitivities in visual processing of real versus unreal optic flow
Li, Li; Shen, Xuechun; Kuai, Shuguang
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02616-y
pmid: 39810018
We examined the intricate mechanisms underlying visual processing of complex motion stimuli by measuring the detection sensitivity to contraction and expansion patterns and the discrimination sensitivity to the location of the center of motion (CoM) in various real and unreal optic flow stimuli. We conducted two experiments (N = 20 each) and compared responses to both "real" optic flow stimuli containing information about self-movement in a three-dimensional scene and "unreal" optic flow stimuli lacking such information. We found that detection sensitivity to contraction surpassed that to expansion patterns for unreal optic flow stimuli, whereas this trend was reversed for real optic flow stimuli. Furthermore, while discrimination sensitivity to the CoM location was not affected by stimulus duration for unreal optic flow stimuli, it showed a significant improvement when stimulus duration increased from 100 to 400 ms for real optic flow stimuli. These findings provide compelling evidence that the visual system employs distinct processing approaches for real versus unreal optic flow even when they are perfectly matched for two-dimensional global features and local motion signals. These differences reveal influences of self-movement in natural environments, enabling the visual system to uniquely process stimuli with significant survival implications.
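A small sketch of how an "unreal" radial flow pattern with a movable centre of motion (CoM) might be generated: dot velocities point away from (expansion) or toward (contraction) the CoM. The linear distance-based speed scaling and all parameter values are illustrative assumptions, not the stimuli used in the study:

```python
import numpy as np

def radial_flow(n_dots=200, com=(0.0, 0.0), expansion=True, speed=0.05, seed=0):
    """One frame step of a 2D radial flow pattern: each dot's velocity
    points along the line from the centre of motion, outward for
    expansion and inward for contraction, scaled by distance."""
    rng = np.random.default_rng(seed)
    dots = rng.uniform(-1, 1, size=(n_dots, 2))   # dot positions in display units
    offsets = dots - np.asarray(com)              # vectors from the CoM
    velocities = speed * offsets * (1 if expansion else -1)
    return dots, velocities

dots, vel = radial_flow(com=(0.2, -0.1), expansion=False)  # contraction pattern
print(vel[:3])  # first three velocity vectors, pointing toward the CoM
```

A "real" flow stimulus would instead be rendered by projecting a simulated 3D scene during self-movement; the point of the comparison is that both stimulus classes can be matched on these 2D global and local motion signals.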
Memories of hand movements are tied to speech through learning
Lametti, Daniel R.; Vaillancourt, Gina L.; Whitman, Maura A.; Skipper, Jeremy I.
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02618-w
pmid: 39753820
Hand movements frequently occur with speech. The extent to which the memories that guide co-speech hand movements are tied to the speech they occur with is unclear. Here, we paired the acquisition of a new hand movement with speech. Thirty participants adapted a ballistic hand movement of a joystick to a visuomotor rotation either in isolation or while producing a word in time with their movements. Within participants, the after-effect of adaptation (i.e., the motor memory) was examined with or without coincident speech. After-effects were greater for hand movements produced in the context in which adaptation occurred (i.e., with or without speech). In a second experiment, 30 new participants adapted a hand movement while saying the words “tap” or “hit”. After-effects were greater when hand movements occurred with the specific word produced during adaptation. The results demonstrate that memories of co-speech hand movements are partially tied to the speech they are learned with. The findings have implications for theories of sensorimotor control and our understanding of the relationship between gestures, speech, and meaning.
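A minimal sketch of the visuomotor-rotation manipulation named above: the cursor's on-screen displacement is the joystick displacement rotated by a fixed angle, which participants gradually compensate for. The 30-degree angle is an assumption for illustration; the abstract does not report the rotation size:

```python
import numpy as np

def rotate_feedback(joystick_xy, angle_deg=30.0):
    """Apply a visuomotor rotation: the displayed cursor position is the
    actual joystick displacement rotated by a fixed angle, the standard
    perturbation that drives adaptation in such paradigms."""
    theta = np.radians(angle_deg)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return rotation @ np.asarray(joystick_xy)

# A straight-ahead movement is displayed rotated by 30 degrees; to hit the
# target, participants must learn to aim 30 degrees in the opposite direction.
print(rotate_feedback([0.0, 1.0]))  # -> approx [-0.5, 0.866]
```

The after-effect is then measured as the residual aiming error once the rotation is removed, which indexes the stored motor memory.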
Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English
de Varda, Andrea Gregor; Marelli, Marco
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02630-0
pmid: 39779657
Auditory iconic words display a phonological profile that imitates their referents’ sounds. Traditionally, such words are thought to constitute a minor portion of the auditory lexicon. In this article, we challenge this assumption by assessing the pervasiveness of onomatopoeia in the English auditory vocabulary through a novel data-driven procedure. We embed spoken words and natural sounds into a shared auditory space through (a) a short-time Fourier transform, (b) a convolutional neural network trained to classify sounds, and (c) a network trained on speech recognition. We then employ the resulting vector representations to measure the objective auditory resemblance between words and sounds. These similarity indexes show that imitation is not limited to a few circumscribed semantic categories but can instead be considered a widespread mechanism underlying the structure of the English auditory vocabulary. Finally, we empirically validate our similarity indexes as measures of iconicity against human judgments.
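A toy sketch of step (a) of the pipeline above, the STFT-based auditory space, with cosine similarity as the resemblance measure; the CNN-based spaces in steps (b) and (c) would require pretrained models and are omitted here. The synthetic signals and all parameters (sampling rate, window length, log-magnitude pooling) are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy.signal import stft

def stft_embedding(waveform, fs=16000):
    """Embed an audio signal as the time-averaged log-magnitude spectrum
    of its short-time Fourier transform: one fixed-length vector per sound."""
    _, _, Z = stft(waveform, fs=fs, nperseg=512)
    return np.log1p(np.abs(Z)).mean(axis=1)

def cosine_similarity(u, v):
    """Objective auditory resemblance as the cosine of the embedding angle."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Synthetic signals standing in for a spoken word and a natural sound;
# real use would load recorded audio instead.
fs = 16000
t = np.linspace(0, 1, fs, endpoint=False)
word = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.default_rng(0).standard_normal(fs)
sound = np.sin(2 * np.pi * 225 * t)
sim = cosine_similarity(stft_embedding(word), stft_embedding(sound))
print(f"auditory resemblance: {sim:.3f}")
```

Computed over a whole lexicon of word-sound pairs, such similarity indexes are what the study correlates with human iconicity judgments.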
Generating distant analogies increases metaphor production
George, Tim; Christofalos, Andriana L.; Pambuccian, Felix S.
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02628-8
pmid: 39753821
Although a large body of work has explored the mechanisms underlying metaphor comprehension, less research has focused on spontaneous metaphor production. Previous research suggests that reasoning about analogies can induce a relational mindset, which causes a greater focus on underlying abstract similarities. We explored how inducing a relational mindset may increase the tendency to use metaphors to describe topics. Participants first solved a series of either cross-domain (i.e., far) analogies (kitten:cat::spark:?) to induce a high relational mindset or within-domain (i.e., near) analogies (kitten:cat::puppy:?) as a control condition. Next, they received a series of topic descriptions containing either one feature (some jobs are confining) or three features (some jobs are confining, repetitive, and unpleasant) and were asked to provide a summary phrase for each topic. Use of metaphoric language increased when topics contained more features and was particularly frequent in the high relational mindset condition. This finding suggests that the relational mindset induction may have shifted attention toward abstract comparisons, thereby facilitating the creative use of metaphorical language.
Product, not process: Metacognitive monitoring of visual performance during sustained attention
Kim, Cheongil; Chong, Sang Chul
2025 Psychonomic Bulletin & Review
doi: 10.3758/s13423-024-02635-9
pmid: 39789201
The performance of the human visual system exhibits moment-to-moment fluctuations influenced by multiple neurocognitive factors. To deal with this instability of the visual system, introspective awareness of current visual performance (metacognitive monitoring) may be crucial. In this study, we investigate whether and how people can monitor their own visual performance during sustained attention by adopting confidence judgments as indicators of metacognitive monitoring, assuming that if participants can monitor visual performance, confidence judgments will accurately track performance fluctuations. In two experiments (N = 40), we found that participants were able to monitor fluctuations in visual performance during sustained attention. Importantly, metacognitive monitoring largely relied on the quality of target perception, a product of visual processing (“I lack confidence in my performance because I only caught a glimpse of the target”), rather than the states of the visual system during visual processing (“I lack confidence because I was not focusing on the task”).
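One standard way to quantify how well confidence tracks trial-to-trial performance is the area under the type-2 ROC curve; the sketch below is a generic illustration of that measure with hypothetical data, not the authors' specific analysis:

```python
import numpy as np

def type2_auroc(confidence, correct):
    """Type-2 ROC area: the probability that a randomly chosen correct
    trial received higher confidence than a randomly chosen error trial,
    a standard summary of how well confidence tracks performance."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    hits, errs = confidence[correct], confidence[~correct]
    greater = (hits[:, None] > errs[None, :]).mean()   # pairwise comparisons
    ties = (hits[:, None] == errs[None, :]).mean()     # ties count half
    return greater + 0.5 * ties

# Hypothetical data: confidence ratings (1-4) and accuracy for 10 trials.
conf = [4, 3, 1, 4, 2, 1, 3, 4, 2, 1]
acc  = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(f"type-2 AUROC = {type2_auroc(conf, acc):.2f}")  # 0.5 = no monitoring
```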