Scenery image retrieval by meta‐feature representation


Online Information Review, Volume 36(4): 17, Aug 3, 2012


Publisher
Emerald Publishing
Copyright
Copyright © 2012 Emerald Group Publishing Limited. All rights reserved.
ISSN
1468-4527
DOI
10.1108/14684521211254040

Abstract

Purpose – Content‐based image retrieval suffers from the semantic gap problem: images are represented by low‐level visual features that are difficult to match directly to the high‐level concepts in the user's mind during retrieval. To date, visual feature representation remains limited in its ability to represent semantic image content accurately. This paper seeks to address these issues.

Design/methodology/approach – The authors propose a novel meta‐feature representation method for scenery image retrieval. In particular, class‐specific distances (termed meta‐features) between low‐level image features are measured: for example, the distance between an image and its class centre, and the distances between the image and its nearest and farthest images in the same class.

Findings – Three experiments based on 190 concrete, 130 abstract, and 610 categories in the Corel dataset show that meta‐features extracted from both global and local visual features significantly outperform the original visual features in terms of mean average precision.

Originality/value – Compared with traditional local and global low‐level features, the proposed meta‐features have higher discriminative power for distinguishing a large number of conceptual categories in scenery image retrieval. In addition, the meta‐features can be applied directly to other image descriptors, such as bag‐of‐words and contextual features.
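The class‐specific distances described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes Euclidean distance on plain feature vectors, and all function names and the toy data are illustrative. For each class, it computes the three meta‐features named in the abstract (distance to the class centre, and distances to the nearest and farthest images of that class) and concatenates them into a new representation.

```python
import numpy as np

def meta_features(x, class_feats):
    """Class-specific distance meta-features for one image.

    x           : 1-D low-level feature vector of the query image.
    class_feats : 2-D array, one row per image of a single class.
    Returns [distance to class centre,
             distance to nearest image of the class,
             distance to farthest image of the class].
    """
    centre = class_feats.mean(axis=0)
    dists = np.linalg.norm(class_feats - x, axis=1)
    return np.array([np.linalg.norm(centre - x), dists.min(), dists.max()])

# Toy example: two classes of 2-D "visual features".
rng = np.random.default_rng(0)
classes = [rng.normal(loc=c, scale=0.5, size=(10, 2)) for c in (0.0, 3.0)]
query = np.array([0.2, 0.1])

# The meta-feature representation concatenates one 3-value block per class.
rep = np.concatenate([meta_features(query, cf) for cf in classes])
print(rep.shape)  # (6,) — 2 classes x 3 distances
```

The resulting vector replaces (or augments) the original low‐level features at retrieval time; because each entry is a distance measured relative to a conceptual class, it is more directly tied to semantics than raw colour or texture values.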

Journal

Online Information Review, Emerald Publishing

Published: Aug 3, 2012

Keywords: Image retrieval; Feature extraction; Feature representation; Class‐specific distances; Meta‐features; Digital images; Image processing
