Abstract
Multi-modal sensor fusion has recently become a widespread technique for providing pervasive services with context-recognition capabilities. However, the classifiers commonly used to implement this technique are still far from perfect. Thus, fusion algorithms that can cope with significant inaccuracies are required. In this paper we present preliminary results obtained with a novel approach that combines diverse classifiers through commonsense reasoning. The approach maps the classification labels produced by the classifiers to concepts organized within the ConceptNet network, and then verifies their semantic proximity by means of a greedy sub-graph search algorithm. Specifically, different classifiers are fused on a commonsense basis to (i) improve classification accuracy and (ii) deal with missing labels. Experimental results are discussed through a real-world case study in which three classifiers are fused to recognize both user activities and locations.
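To make the fusion idea concrete, the following is a minimal sketch, not the paper's actual algorithm: a tiny hand-made ConceptNet-like graph (the edge list is an assumption, as are all function names), breadth-first hop count as a simple stand-in for the greedy sub-graph search, and a fusion step that selects the (activity, location) label pair whose concepts are semantically closest.

```python
from collections import deque

# Hypothetical ConceptNet-like commonsense graph (toy edge list, for
# illustration only; the real approach queries the ConceptNet network).
EDGES = [
    ("cooking", "kitchen"), ("kitchen", "stove"), ("cooking", "food"),
    ("sleeping", "bedroom"), ("bedroom", "bed"), ("running", "park"),
]

def build_graph(edges):
    """Build an undirected adjacency map from the edge list."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def semantic_distance(graph, start, goal, max_depth=4):
    """Hop count between two concepts via breadth-first search,
    a simple stand-in for the paper's greedy sub-graph search."""
    if start == goal:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for nxt in graph.get(node, ()):
            if nxt == goal:
                return depth + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None  # not semantically close within max_depth hops

def fuse_labels(graph, activity_labels, location_labels):
    """Fuse two classifiers' outputs by picking the (activity, location)
    pair with the smallest semantic distance in the commonsense graph."""
    best, best_d = None, float("inf")
    for act in activity_labels:
        for loc in location_labels:
            d = semantic_distance(graph, act, loc)
            if d is not None and d < best_d:
                best, best_d = (act, loc), d
    return best

graph = build_graph(EDGES)
# "cooking" and "kitchen" are one hop apart, so that pair wins.
print(fuse_labels(graph, ["cooking", "sleeping"], ["kitchen", "park"]))
```

The same proximity check also suggests how missing labels could be handled: if one classifier yields no label, the closest concept to the remaining classifiers' labels can be proposed in its place.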