A new learning algorithm, forward propagation (FP), for multilayered feed-forward neural networks is presented in this paper. The authors show that, as an associative memory, the network constructed by the FP algorithm has several advantages: (1) each training sample is an attractive center; (2) the attractive radius of each training sample reaches the maximum; (3) there are no spurious attractive centers in the network; (4) the network has a minimal number of elements; and (5) the order of its learning complexity is optimal. The FP learning algorithm is also an effective synthesis tool, i.e., the network architecture can be constructed during the learning process.
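The abstract gives no implementation details, but the attractor properties (1)-(3) can be illustrated with a toy associative memory over binary patterns. The sketch below is a hypothetical illustration, not the FP algorithm itself: it treats each stored pattern as an attractive center and takes the maximal radius of unambiguous recall to be floor((d_min - 1)/2), where d_min is the minimum pairwise Hamming distance between stored patterns; all names are assumptions introduced for this example.

```python
import itertools

# Hypothetical illustration (not the paper's FP algorithm): stored binary
# patterns act as attractive centers; the largest radius at which the
# attraction basins stay disjoint is floor((d_min - 1) / 2).

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def max_attractive_radius(patterns):
    """Largest radius r such that the radius-r balls around the stored
    patterns are pairwise disjoint, so every recall is unambiguous."""
    d_min = min(hamming(p, q) for p, q in itertools.combinations(patterns, 2))
    return (d_min - 1) // 2

def recall(probe, patterns, radius):
    """Map a probe to the unique stored pattern within `radius`, or
    return None if the probe lies outside every attraction basin."""
    for p in patterns:
        if hamming(probe, p) <= radius:
            return p
    return None

if __name__ == "__main__":
    stored = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]
    r = max_attractive_radius(stored)   # d_min = 5, so r = 2
    noisy = (0, 1, 0, 0, 1)             # stored[0] with 2 bits flipped
    print(r, recall(noisy, stored, r))  # -> 2 (0, 0, 0, 0, 0)
```

In this simplified setting, every stored pattern is an attractor, the radius is the largest possible given the stored set, and no probe is attracted to anything other than a stored pattern, mirroring claims (1)-(3) of the abstract.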