2009 IEEE International Conference on Systems, Man, and Cybernetics
Abstract
The R4-rule is a heuristic algorithm for distance-based neural network (DBNN) learning. Experimental results show that the R4-rule can obtain the smallest or nearly smallest DBNNs. However, the computational cost of the R4-rule is relatively high because the learning vector quantization (LVQ) algorithm is applied iteratively during learning. To reduce this cost, this paper investigates three approaches. The first, called distance preservation (DP), tries to reduce the number of distance calculations; the other two are based on the attentional learning concept and try to reduce the amount of data used for learning. The efficiency of these methods is verified through experiments on several public databases.
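As context for the cost issue the abstract raises, the sketch below shows one epoch of the classical LVQ1 update, whose inner loop computes a distance from each sample to every prototype; it is this distance computation that the proposed DP approach aims to reduce. This is a generic LVQ1 illustration, not the paper's R4-rule itself, and the function name and learning rate are illustrative assumptions.

```python
import numpy as np

def lvq1_epoch(prototypes, proto_labels, X, y, lr=0.05):
    """One epoch of standard LVQ1 (illustrative, not the R4-rule).

    For each training sample, find the nearest prototype; pull it toward
    the sample if the class labels match, push it away otherwise.
    """
    for x, label in zip(X, y):
        # Distance from x to every prototype: the dominant cost that
        # distance preservation (DP) tries to avoid recomputing.
        d = np.linalg.norm(prototypes - x, axis=1)
        w = np.argmin(d)  # index of the winning (nearest) prototype
        if proto_labels[w] == label:
            prototypes[w] += lr * (x - prototypes[w])  # attract
        else:
            prototypes[w] -= lr * (x - prototypes[w])  # repel
    return prototypes
```

Because the R4-rule invokes this kind of update repeatedly, the per-epoch distance computations multiply; DP caches distances where possible, while the attentional-learning variants shrink the set of samples iterated over.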