%0 Conference Paper %A Izzeldin, H. %A Asirvadam, V.S. %A Saad, N. %D 2010 %F scholars:980 %K Bench-mark problems; Broyden; Computational time; Conjugate gradient; Conjugate gradient algorithms; Memory usage; Multilayer perceptron neural networks; Offline learning; Prediction errors; Training algorithms, Algorithms; Conjugate gradient method; Engineering research; Innovation; Network architecture, Neural networks %P 139-143 %R 10.1109/SCORED.2010.5703989 %T Enhanced conjugate gradient methods for training MLP-networks %U https://khub.utp.edu.my/scholars/980/ %X The paper investigates enhancements to various conjugate gradient training algorithms applied to a multilayer perceptron (MLP) neural network architecture. It examines seven conjugate gradient algorithms proposed by different researchers between 1952 and 2005, along with classical batch back-propagation and the full-memory and memory-less BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithms. These algorithms are tested in predicting fluid height in two different control-tank benchmark problems. Simulation results show that full-memory BFGS achieves the best overall performance, i.e. the lowest prediction error; however, it incurs higher memory usage and longer computational time than the conjugate gradient methods. ©2010 IEEE. %Z cited By 4; Conference of 2010 8th IEEE Student Conference on Research and Development - Engineering: Innovation and Beyond, SCOReD 2010 ; Conference Date: 13 December 2010 Through 14 December 2010; Conference Code:83885