Image editing-based data augmentation for illumination-insensitive background subtraction


Publisher: Emerald Publishing
Copyright: © Emerald Publishing Limited
ISSN: 1741-0398
DOI: 10.1108/jeim-02-2020-0042

Abstract

Purpose: A core challenge in background subtraction (BGS) is handling videos with sudden illumination changes in consecutive frames. In our pilot study published in SKIMA 2019, we tackle the problem from a data point of view using data augmentation. Our method performs data augmentation that not only creates endless data on the fly but also features semantic transformations of illumination, which enhance the generalisation of the model.

Design/methodology/approach: In our pilot study published in SKIMA 2019, the proposed framework successfully simulates flashes and shadows by applying the Euclidean distance transform over a randomly generated binary mask. In this paper, we further enhance the data augmentation framework by proposing new variations in image appearance, both locally and globally.

Findings: Experimental results demonstrate the contribution of the synthetics to the models' ability to perform BGS even when significant illumination changes take place.

Originality/value: Such data augmentation allows us to effectively train an illumination-invariant deep learning model for BGS. We further propose a post-processing method that removes noise from the output binary segmentation map, resulting in a cleaner, more accurate segmentation map that can generalise to multiple scenes of different conditions. We show that it is possible to train deep learning models even with very limited training samples. The source code of the project is made publicly available at https://github.com/dksakkos/illumination_augmentation
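The augmentation step described in the abstract applies the Euclidean distance transform over a randomly generated binary mask to obtain a smooth illumination falloff, which then brightens (flash) or darkens (shadow) the frame locally. The sketch below is a minimal illustration of that idea; the function name, the `strength` parameter and the single-seed mask are assumptions for illustration, not the authors' exact implementation (see the linked repository for that).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def illumination_augment(image, strength=0.6, flash=True, rng=None):
    """Simulate a local flash or shadow on `image` (H x W x C floats in [0, 1]).

    Illustrative sketch only: a random seed pixel defines a binary mask,
    the Euclidean distance transform of its complement gives a smooth
    falloff, and the falloff modulates image brightness.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]

    # Random binary mask: here, a single randomly placed seed pixel.
    mask = np.zeros((h, w), dtype=bool)
    mask[rng.integers(h), rng.integers(w)] = True

    # Distance from every pixel to the seed, normalised to [0, 1].
    dist = distance_transform_edt(~mask)
    falloff = 1.0 - dist / dist.max()  # 1 at the seed, 0 farthest away

    # Brighten (flash) or darken (shadow) proportionally to the falloff.
    gain = 1.0 + strength * falloff if flash else 1.0 - strength * falloff
    return np.clip(image * gain[..., None], 0.0, 1.0)
```

Because the seed location is drawn at random on every call, such an augmentation can produce effectively endless illumination variants from a small set of training frames, which matches the abstract's claim of creating data on the fly. The abstract also mentions a post-processing step that removes noise from the binary segmentation map but gives no detail; morphological opening followed by closing is a standard way to do this, shown here purely as an illustrative stand-in for the paper's method:

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def clean_segmentation(binary_map, size=3):
    """Remove speckle noise from a binary segmentation map (illustrative).

    Opening deletes isolated foreground specks; closing fills small holes
    inside foreground regions.
    """
    structure = np.ones((size, size), dtype=bool)
    cleaned = binary_opening(binary_map.astype(bool), structure=structure)
    return binary_closing(cleaned, structure=structure)
```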

Journal

Journal of Enterprise Information Management, Emerald Publishing

Published: Apr 24, 2023

Keywords: Background subtraction; Convolutional neural networks; Synthetics; Data augmentation; Illumination-invariant
