2012 IEEE Conference on Computer Vision and Pattern Recognition
Abstract

Identifying suitable image features is a central challenge in computer vision, spanning representations for low-level to high-level vision. Due to the difficulty of this task, techniques for learning features directly from example data have recently gained attention. Despite significant benefits, these learned features often have far fewer of the desired invariances or equivariances than their hand-crafted counterparts. While translation in-/equivariance has been addressed, learning rotation-invariant or equivariant representations has hardly been explored. In this paper we describe a general framework for incorporating invariance to linear image transformations into product models for feature learning. A particular benefit is that our approach induces transformation-aware feature learning, i.e. it yields features that carry a notion of the specific image transformation with which they are used. We focus our study on rotation in-/equivariance and show the advantages of our approach in learning rotation-invariant image priors and in building rotation-equivariant and invariant descriptors of learned features, which result in state-of-the-art performance for rotation-invariant object detection.
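To make the notion of rotation in-/equivariance concrete, the following minimal sketch (not the authors' method) correlates an image with rotated copies of a single filter: keeping the full stack of per-orientation responses yields a rotation-equivariant representation, while max-pooling over orientations yields an approximately rotation-invariant one. The filter, image, and number of orientations are placeholder assumptions for illustration only.

```python
# Minimal sketch: rotation-equivariant vs. rotation-invariant filter responses.
# This is an illustrative assumption, not the implementation from the paper.

import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d


def rotation_aware_responses(image, base_filter, n_orientations=8):
    """Correlate `image` with `n_orientations` rotated copies of `base_filter`.

    Returns
    -------
    equivariant : (n_orientations, H, W) array
        Per-orientation responses; rotating the input permutes this stack.
    invariant : (H, W) array
        Max over orientations; approximately unchanged under input rotation.
    """
    responses = []
    for k in range(n_orientations):
        angle = 360.0 * k / n_orientations
        # Rotate the filter rather than the image (cheaper for small filters).
        rot_filter = rotate(base_filter, angle, reshape=False, order=1)
        responses.append(
            correlate2d(image, rot_filter, mode="same", boundary="symm")
        )
    equivariant = np.stack(responses, axis=0)
    invariant = equivariant.max(axis=0)
    return equivariant, invariant


# Toy usage with a random image and a small oriented filter (both hypothetical).
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
filt = np.outer(np.hanning(9), np.array([-1, -1, 0, 1, 1, 0, -1, -1, 0], float))
equi, inv = rotation_aware_responses(img, filt)
print(equi.shape, inv.shape)  # (8, 64, 64) (64, 64)
```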