MSANet: multimodal self-augmentation and adversarial network for RGB-D object recognition



The Visual Computer , Volume 35 (11) – May 29, 2018

Publisher
Springer Journals
Copyright
Copyright © 2018 by Springer-Verlag GmbH Germany, part of Springer Nature
Subject
Computer Science; Computer Graphics; Computer Science, general; Artificial Intelligence; Image Processing and Computer Vision
ISSN
0178-2789
eISSN
1432-2315
DOI
10.1007/s00371-018-1559-x

Abstract

This paper addresses the problem of object recognition from RGB-D data. Although deep convolutional neural networks have made progress in this area, they still suffer from the lack of large-scale, manually labeled RGB-D data. Labeling a large-scale RGB-D dataset is a time-consuming and tedious task. More importantly, such large-scale datasets often exhibit a long tail, and the hard positive examples in the tail can hardly be recognized. To solve these problems, we propose a multimodal self-augmentation and adversarial network (MSANet) for RGB-D object recognition, which augments the data effectively at two levels while preserving the annotations. At the first level, a series of transformations is leveraged to generate class-agnostic examples for each instance, which supports the training of our MSANet. At the second level, an adversarial network is proposed to generate class-specific hard positive examples while learning to classify them correctly, further improving the performance of our MSANet. With these schemes, the proposed approach achieves the best results on several available RGB-D object recognition datasets; e.g., our experiments show a 1.5% accuracy improvement on the benchmark Washington RGB-D object dataset over the current state of the art.
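The abstract's first level of augmentation applies label-preserving transformations to each instance. A minimal sketch of what such class-agnostic, annotation-preserving augmentation could look like for paired RGB and depth inputs is given below; the specific transforms (joint horizontal flip and random crop) and the function name `augment_rgbd` are illustrative assumptions, not the paper's exact transformation set.

```python
import numpy as np

def augment_rgbd(rgb, depth, rng=None):
    """Apply the same label-preserving geometric transforms to an RGB image
    and its aligned depth map, so the class annotation stays valid.
    Shapes: rgb (H, W, 3), depth (H, W). Illustrative sketch only."""
    rng = rng if rng is not None else np.random.default_rng()

    # Random horizontal flip, applied jointly so the two modalities stay aligned.
    if rng.random() < 0.5:
        rgb, depth = rgb[:, ::-1], depth[:, ::-1]

    # Random crop to 7/8 of each side, using the same window for both modalities.
    h, w = depth.shape
    ch, cw = int(h * 7 / 8), int(w * 7 / 8)
    top = int(rng.integers(0, h - ch + 1))
    left = int(rng.integers(0, w - cw + 1))
    rgb = rgb[top:top + ch, left:left + cw]
    depth = depth[top:top + ch, left:left + cw]
    return rgb, depth
```

Because both modalities pass through identical geometric operations, the pixel-level correspondence between color and depth is preserved, which is what makes the generated examples usable for training without re-annotation.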

Journal

The Visual Computer, Springer Journals

Published: May 29, 2018
