Enhanced conjugate gradient methods for training MLP-networks

Izzeldin, H. and Asirvadam, V.S. and Saad, N. (2010) Enhanced conjugate gradient methods for training MLP-networks. In: UNSPECIFIED.

Full text not available from this repository.
Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....

Abstract

The paper investigates enhancements to various conjugate gradient training algorithms applied to a multilayer perceptron (MLP) neural network architecture. It examines seven conjugate gradient algorithms proposed by different researchers between 1952 and 2005, together with classical batch back-propagation and the full-memory and memory-less BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithms. These algorithms are tested on predicting fluid height in two control-tank benchmark problems. Simulation results show that full-memory BFGS achieves the best overall performance, i.e. the lowest prediction error; however, it incurs higher memory usage and longer computational time than the conjugate gradient methods. ©2010 IEEE.
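For orientation, the following is a minimal sketch (not the paper's code) of how a conjugate gradient weight update differs from plain gradient descent: the search direction mixes the new negative gradient with the previous direction, and the choice of the beta formula is what distinguishes variants such as Fletcher-Reeves and Polak-Ribiere, two classical members of the 1952-2005 family the paper surveys. The function names, the fixed step size, and the toy quadratic loss are all illustrative assumptions; a full implementation would use a line search along each direction.

import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    # Fletcher-Reeves: ratio of squared gradient norms.
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    # Polak-Ribiere: uses the gradient change; the max(0, .)
    # restart is common practice to keep a descent direction.
    return max(0.0, (g_new @ (g_new - g_old)) / (g_old @ g_old))

def cg_step(w, g_new, g_old, d_old, lr=0.01, beta_fn=beta_polak_ribiere):
    # One update: d_k = -g_k + beta_k * d_{k-1};  w <- w + lr * d_k.
    # A fixed step size lr stands in here for the line search a
    # full conjugate gradient implementation would perform.
    beta = beta_fn(g_new, g_old)
    d_new = -g_new + beta * d_old
    return w + lr * d_new, d_new

# Toy usage on the quadratic loss 0.5 * ||w||^2, whose gradient is w:
w = np.array([1.0, -2.0, 0.5])
g_old = w.copy()
d_old = -g_old                       # first direction: steepest descent
for _ in range(20):
    g_new = w.copy()                 # gradient at the current point
    w, d_old = cg_step(w, g_new, g_old, d_old)
    g_old = g_new

In an MLP training context, g_new would be the gradient of the batch error with respect to all network weights, flattened into one vector.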

Item Type: Conference or Workshop Item (UNSPECIFIED)
Additional Information: Cited by: 4; Conference: 2010 8th IEEE Student Conference on Research and Development - Engineering: Innovation and Beyond (SCOReD 2010); Conference date: 13-14 December 2010; Conference code: 83885
Uncontrolled Keywords: Bench-mark problems; Broyden; Computational time; Conjugate gradient; Conjugate gradient algorithms; Memory usage; Multilayer perceptron neural networks; Offline learning; Prediction errors; Training algorithms, Algorithms; Conjugate gradient method; Engineering research; Innovation; Network architecture, Neural networks
Depositing User: Mr Ahmad Suhairi UTP
Date Deposited: 09 Nov 2023 15:49
Last Modified: 09 Nov 2023 15:49
URI: https://khub.utp.edu.my/scholars/id/eprint/980
