TY - CONF
ID - scholars980
UR - https://www.scopus.com/inward/record.uri?eid=2-s2.0-79951986009&doi=10.1109%2fSCORED.2010.5703989&partnerID=40&md5=abec87933aec16425408303635d9a44b
A1 - Izzeldin, H.
A1 - Asirvadam, V.S.
A1 - Saad, N.
N1 - cited By 4; Conference of 2010 8th IEEE Student Conference on Research and Development - Engineering: Innovation and Beyond, SCOReD 2010; Conference Date: 13 December 2010 Through 14 December 2010; Conference Code: 83885
EP - 143
CY - Kuala Lumpur
SN - 9781424486489
N2 - The paper investigates enhancements to various conjugate gradient training algorithms applied to a multilayer perceptron (MLP) neural network architecture. It examines seven conjugate gradient algorithms proposed by different researchers between 1952 and 2005, the classical batch back-propagation algorithm, and the full-memory and memory-less BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithms. These algorithms are tested on predicting fluid height in two control-tank benchmark problems. Simulation results show that full-memory BFGS achieves the best overall performance, i.e. the lowest prediction error; however, it has higher memory usage and longer computational time than the conjugate gradient methods. ©2010 IEEE.
TI - Enhanced conjugate gradient methods for training MLP-networks
KW - Bench-mark problems; Broyden; Computational time; Conjugate gradient; Conjugate gradient algorithms; Memory usage; Multilayer perceptron neural networks; Offline learning; Prediction errors; Training algorithms
KW - Algorithms; Conjugate gradient method; Engineering research; Innovation; Network architecture
KW - Neural networks
Y1 - 2010///
SP - 139
AV - none
ER -