Proceedings of the 17th International Conference on Pattern Recognition

Abstract

The Dynamic Decay Adjustment (DDA) algorithm is a fast constructive algorithm for training RBF neural networks. Previous work has shown that, for some datasets, the generalization performance of RBF-DDA depends only weakly on the algorithm parameters θ+ and θ-. However, we have observed experimentally that for other problems performance depends considerably on the value of θ-. In this work we propose a method for selecting the value of θ- so as to optimize performance. The proposed method has been evaluated on three optical recognition datasets from the UCI repository. The results show that it considerably improves the performance of RBF-DDA with default parameters on these tasks. The results are compared to MLP and k-NN results obtained in previous works: the proposed method outperforms MLPs and achieves results comparable to k-NN on these tasks.
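To make the role of θ+ and θ- concrete, the sketch below implements one training epoch of the standard DDA rule from the literature (not the θ- selection method proposed in this paper): a pattern covered by a same-class prototype above θ+ commits to it; otherwise a new prototype is introduced, and conflicting-class prototypes are shrunk until their response at the pattern falls below θ-. All class and variable names here are our own illustrative choices, assuming Gaussian basis functions R(x) = exp(-||x - z||² / σ²).

```python
import numpy as np

class RBFDDA:
    """Minimal illustrative sketch of DDA training (hypothetical naming)."""

    def __init__(self, theta_plus=0.4, theta_minus=0.2):
        self.theta_plus = theta_plus    # commit threshold
        self.theta_minus = theta_minus  # conflict threshold
        self.protos = []  # each: {"z": center, "sigma": width, "w": weight, "cls": label}

    def _activation(self, p, x):
        # Gaussian RBF response of prototype p at pattern x
        d2 = np.sum((x - p["z"]) ** 2)
        return np.exp(-d2 / p["sigma"] ** 2)

    def train_epoch(self, X, y):
        for x, c in zip(X, y):
            covered = [p for p in self.protos
                       if p["cls"] == c and self._activation(p, x) >= self.theta_plus]
            if covered:
                # commit: a same-class prototype already covers x above theta_plus
                covered[0]["w"] += 1.0
            else:
                # introduce a new prototype; cap its width so no conflicting-class
                # center is activated above theta_minus
                sigma = np.inf
                for q in self.protos:
                    if q["cls"] != c:
                        d = np.linalg.norm(x - q["z"])
                        sigma = min(sigma, d / np.sqrt(np.log(1.0 / self.theta_minus)))
                if not np.isfinite(sigma):
                    sigma = 1.0  # arbitrary default width when no conflicts exist yet
                self.protos.append({"z": x.astype(float), "sigma": sigma,
                                    "w": 1.0, "cls": c})
            # shrink: conflicting-class prototypes must respond below theta_minus at x
            for q in self.protos:
                if q["cls"] != c and self._activation(q, x) >= self.theta_minus:
                    d = np.linalg.norm(x - q["z"])
                    q["sigma"] = 0.999 * d / np.sqrt(np.log(1.0 / self.theta_minus))

    def predict(self, x):
        # classify by the class with the largest weighted sum of responses
        scores = {}
        for p in self.protos:
            scores[p["cls"]] = scores.get(p["cls"], 0.0) + p["w"] * self._activation(p, x)
        return max(scores, key=scores.get)
```

Note how θ- enters twice (new-prototype width and the shrink step): a smaller θ- forces narrower, more local prototypes, which is why performance on some datasets is sensitive to its value.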
