IEEE-INNS-ENNS International Joint Conference on Neural Networks

Abstract

Biasing the hypothesis space of a learner has been shown to improve generalization performance. Proposed methods for achieving this goal range from deriving a bias and introducing it into a learner to learning the bias automatically. In the latter case, most methods learn the bias by simultaneously training several related tasks drawn from the same domain and imposing constraints on their parameters. We extend some of the ideas presented in this field and describe a new model that expresses the parameters of each task as a function of an affine manifold defined in parameter space and of a point lying on that manifold. An analysis of variance on a class of learning tasks shows significantly improved performance when the model is used.
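The abstract does not give the model's equations, so the following is only a rough, hypothetical sketch of the kind of parameterization it describes: each task's parameter vector is written as a shared affine manifold (directions U and offset b, the learned bias common to all tasks) plus a task-specific coordinate z_t on that manifold, i.e. w_t = U z_t + b. All names, the linear task model, and the squared loss are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: task parameters constrained to a shared affine manifold,
#   w_t = U @ z_t + b,
# where (U, b) define the manifold shared across tasks and z_t is the
# task-specific point on it. Illustrative only; not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)

n_tasks, d, k = 5, 20, 3            # tasks, parameter dimension, manifold dimension
U = rng.normal(size=(d, k))         # shared manifold directions (the learned bias)
b = rng.normal(size=d)              # shared manifold offset
Z = rng.normal(size=(n_tasks, k))   # per-task coordinates on the manifold

def task_weights(U, b, z):
    """Map a task's manifold coordinate z to its full parameter vector."""
    return U @ z + b

def total_loss(U, b, Z, data):
    """Sum of per-task squared losses for linear models w_t = U z_t + b."""
    loss = 0.0
    for t, (X, y) in enumerate(data):
        w = task_weights(U, b, Z[t])
        loss += np.mean((X @ w - y) ** 2)
    return loss

# Toy data: each task is a noisy linear problem whose true weights lie
# near a common low-dimensional affine subspace in parameter space.
data = []
for t in range(n_tasks):
    X = rng.normal(size=(100, d))
    w_true = task_weights(U, b, Z[t]) + 0.01 * rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=100)
    data.append((X, y))

print("total loss at the generating parameters:", total_loss(U, b, Z, data))
```

In such a setup, training would jointly optimize the shared (U, b) and the per-task coordinates Z over all tasks, so the manifold itself acts as the bias learned from the task family.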