%0 Journal Article
%@ 01651684
%A Javvad ur Rehman, M.
%A Dass, S.C.
%A Asirvadam, V.S.
%D 2019
%F scholars:11520
%I Elsevier B.V.
%J Signal Processing
%K Bayesian networks; Dynamical systems; Inference engines; Markov processes; Nonlinear dynamical systems; State space methods, Bayesian inference; Learning procedures; Markov Chain Monte-Carlo; MCMC; Measurement equations; Posterior distributions; State-space models; Sufficient statistics, Sampling
%P 32-44
%R 10.1016/j.sigpro.2019.02.020
%T An augmented sequential MCMC procedure for particle based learning in dynamical systems
%U https://khub.utp.edu.my/scholars/11520/
%V 160
%X Dynamical systems elicited via state space models consist of two components: a state equation and a measurement equation that evolve over time. This paper addresses Bayesian inference of unknown parameters, or parameter learning, of such systems. Particle-based parameter learning methods form a well-known class of procedures for obtaining inference in state space models, in which a collection of particles is used to represent the posterior distributions of parameters. However, particle-based learning procedures require the availability of sufficient statistics and tractable posterior distributions of parameters based on these statistics for sampling, which is not the case in many situations. We address the problem of particle-based learning when sufficient statistics and tractable distributions for sampling are not available. An augmented sequential Markov Chain Monte Carlo (ASMCMC) algorithm is developed for obtaining the posterior distribution of unknown parameters. We provide three guiding examples of nonlinear dynamical systems for which sufficient statistics and tractable distributions for sampling are not available, and illustrate the proposed ASMCMC methodology on these examples based on simulated data. © 2019 Elsevier B.V.
%Z Cited by 3