TCB Publications - Abstract

K.-R. Müller, N. Murata, M. Finke, K. Schulten, and S. Amari. A numerical study of learning curves in stochastic multi-layer feed-forward networks. Neural Computation, 8:1085-1106, 1995.

MULL95A The universal asymptotic scaling laws proposed by Amari et al. are studied in large-scale simulations on a CM-5. Small stochastic multi-layer feed-forward networks trained with back-propagation are investigated. For a large number of training patterns $t$, the asymptotic generalization error scales as $1/t$, as predicted. For medium $t$, a faster $1/t^{2}$ scaling is observed; this effect is explained by higher-order corrections of the likelihood expansion. For small $t$ it is shown that the scaling law changes drastically when the network undergoes a transition from ineffective to effective learning.
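The different scaling regimes described above can be read off as slopes of the learning curve on a log-log plot. The following minimal sketch (not from the paper; the function name, synthetic data, and crossover point are purely illustrative) shows how a power-law exponent of a generalization-error curve $\varepsilon(t) \sim C/t^{\alpha}$ could be estimated by a least-squares fit in log-log space:

```python
import numpy as np

def fit_scaling_exponent(t, eps):
    """Fit eps ~ C * t**(-alpha) in log-log space; return (alpha, C)."""
    slope, intercept = np.polyfit(np.log(t), np.log(eps), 1)
    return -slope, np.exp(intercept)

# Synthetic learning curve: a 1/t^2 term dominating at medium t,
# crossing over to the universal 1/t law at large t (illustrative only).
t = np.logspace(1, 4, 30)
eps = 50.0 / t**2 + 0.5 / t

alpha_medium, _ = fit_scaling_exponent(t[t < 100], eps[t < 100])
alpha_large, _ = fit_scaling_exponent(t[t > 1000], eps[t > 1000])
print(f"medium-t exponent ~ {alpha_medium:.2f}, large-t exponent ~ {alpha_large:.2f}")
```

Applied to measured generalization errors at increasing training-set sizes, such a fit would distinguish a medium-$t$ exponent near 2 from the asymptotic exponent near 1.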

Download Full Text

The manuscripts available on our site are provided for your personal use only and may not be retransmitted or redistributed without written permission from the paper's publisher and author. You may not upload any of this site's material to any public server, on-line service, network, or bulletin board without prior written permission from the publisher and author. You may not make copies for any commercial purpose. Reproduction or storage of materials retrieved from this web site is subject to the U.S. Copyright Act of 1976, Title 17 U.S.C.

Download full text: Journal, PDF (788.8KB)