Abstract
The rise of wearable devices has opened many new ways of re-identifying an individual. Unlike static cameras, whose views are often restricted or zoomed out and where occlusions are common, first-person views (FPVs), or egocentric views, observe people up close and mostly capture unoccluded face images. In this paper, we propose a face re-identification framework designed for a network of multiple wearable devices. The framework employs a global data association method, termed Network Consistent Re-identification (NCR), that not only maintains consistency in association results across the network but also improves pairwise face re-identification accuracy. To test the proposed pipeline, we collected a database of FPV videos of 72 persons captured with multiple wearable devices (such as Google Glass) in a multi-story office environment. Experimental results indicate that NCR consistently achieves large performance gains over state-of-the-art methods.