Abstract
Multimodal dialogue management combines input and output across multiple interaction modalities and technologies. In this paper, we present research carried out within the framework of the European project Indigo, which aims at natural communication with a virtual human and, ultimately, with a highly realistic robotic head with a skin-based face. We have defined memory models, emotion recognition, and dialogue interaction driven by the recognized emotions of the user. A dialogue can be held either with a virtual human or with our robot head, which can recognize the user, ask specific questions about his or her habits, understand the answers, and behave accordingly.