2016 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)

Abstract

Billions of geotagged ground-level images are available via social networks and Google Street View. Recent work in computer vision has explored how these images could serve as a resource for understanding our world. However, most ground-level images are captured in cities and around famous landmarks; there are still very large geographic regions with few images. This leads to artifacts when estimating geospatial distributions. We propose to leverage satellite imagery, which has dense spatial coverage and increasingly high temporal frequency, to address this problem. We introduce Cross-view ConvNets (CCNs), a novel approach for estimating geospatial distributions in which semantic labels of ground-level imagery are transferred to satellite imagery to enable more accurate predictions.
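The abstract describes transferring semantic labels from ground-level imagery to co-located satellite imagery so a second ConvNet can predict those labels from the overhead view. The following is a minimal sketch of that cross-view label-transfer idea, not the authors' implementation: the model choices (ResNet-18), the helper name transfer_and_train, the number of classes, and the training loop are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of cross-view label transfer:
# a ConvNet labels geotagged ground-level images, and those labels become
# training targets for a second ConvNet that sees satellite patches of the
# same locations. Model architectures and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumed number of semantic label categories

# Ground-level ConvNet: assumed to be already trained on labeled ground photos.
ground_cnn = models.resnet18(weights=None)
ground_cnn.fc = nn.Linear(ground_cnn.fc.in_features, NUM_CLASSES)
ground_cnn.eval()

# Overhead (satellite) ConvNet: trained on labels transferred from the ground view.
overhead_cnn = models.resnet18(weights=None)
overhead_cnn.fc = nn.Linear(overhead_cnn.fc.in_features, NUM_CLASSES)
optimizer = torch.optim.Adam(overhead_cnn.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def transfer_and_train(ground_batch, satellite_batch):
    """ground_batch and satellite_batch are co-located image tensors of
    shape (B, 3, 224, 224) captured at the same geotagged points."""
    with torch.no_grad():
        # Pseudo-labels predicted from the ground-level view.
        pseudo_labels = ground_cnn(ground_batch).argmax(dim=1)
    # Supervise the satellite-view ConvNet with the transferred labels.
    optimizer.zero_grad()
    logits = overhead_cnn(satellite_batch)
    loss = criterion(logits, pseudo_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Random tensors stand in for real co-located ground/satellite image pairs.
    g = torch.randn(4, 3, 224, 224)
    s = torch.randn(4, 3, 224, 224)
    print("loss:", transfer_and_train(g, s))
```

Because satellite imagery covers regions densely, the overhead model trained this way can produce geospatial predictions even where ground-level photos are sparse, which is the gap the abstract highlights.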