Abstract
It has been shown in many different contexts that the Generalized Gaussian (GG) distribution is a flexible and suitable tool for data modeling. Almost all reported applications focus on modeling points (fixed-length vectors); a different but crucial scenario, where the use of the GG has received little attention, is the modeling of sequential data, i.e., variable-length vectors. This paper explores this latter direction, describing a variant of the well-known Hidden Markov Model (HMM) in which the emission probability function of each state is represented by a GG. A training strategy based on the Expectation-Maximization (EM) algorithm is presented. Experiments on both synthetic and real data (EEG signal classification and face recognition) demonstrate the suitability of the proposed approach compared with the standard Gaussian HMM.