2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops
Abstract

We present an efficient object retrieval system based on the identification of abstract deformable shape classes using the self-similarity descriptor of Shechtman and Irani. Given a user-specified query object, we retrieve other images which share a common shape, even if their appearance differs greatly in terms of colour, texture, edges and other common photometric properties. In order to use the self-similarity descriptor for efficient retrieval we make three contributions: (i) we sparsify the descriptor points by locating discriminative regions within each image, thus reducing the computational expense of shape matching; (ii) we extend the descriptor to enable matching despite changes in scale; and (iii) we show that vector quantizing the descriptor does not inhibit performance, thus providing the basis of a large-scale shape-based retrieval system using a bag-of-visual-words approach. Performance is demonstrated on the challenging ETHZ deformable shape dataset and a full episode from the television series Lost, and is shown to be superior to appearance-based approaches for matching non-rigid shape classes.
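
To illustrate the vector-quantization step mentioned in contribution (iii), the following is a minimal sketch, not the authors' implementation, of how local self-similarity descriptors might be assigned to visual words and pooled into a bag-of-visual-words histogram. The descriptor extraction and the visual-word vocabulary (e.g. k-means centroids) are assumed to be computed elsewhere, and all function and variable names below are illustrative.

    import numpy as np

    def quantize_descriptors(descriptors, vocabulary):
        # Assign each descriptor (rows of `descriptors`) to its nearest
        # visual word (rows of `vocabulary`) by Euclidean distance.
        d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
        return d2.argmin(axis=1)

    def bag_of_words_histogram(descriptors, vocabulary):
        # Build an L1-normalised bag-of-visual-words histogram for one image.
        words = quantize_descriptors(descriptors, vocabulary)
        hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Example with random data standing in for self-similarity descriptors
    # (assumed dimensions: 100 visual words, 30-dimensional descriptors).
    rng = np.random.default_rng(0)
    vocab = rng.normal(size=(100, 30))
    desc = rng.normal(size=(250, 30))
    print(bag_of_words_histogram(desc, vocab).shape)  # (100,)

Once images are reduced to such histograms, retrieval can proceed with the standard bag-of-visual-words machinery (inverted files, histogram similarity), which is what allows the shape-based matching to scale to large image collections.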