TY - JOUR
TI - Pluggable Micronetwork for Layer Configuration Relay in a Dynamic Deep Neural Surface
A1 - Khan, F.U.
A1 - Aziz, I.B.
A1 - Akhir, E.A.P.
JF - IEEE Access
VL - 9
SP - 124831
EP - 124846
Y1 - 2021///
PB - Institute of Electrical and Electronics Engineers Inc.
SN - 21693536
N1 - cited By 1
UR - https://www.scopus.com/inward/record.uri?eid=2-s2.0-85114735064&doi=10.1109%2fACCESS.2021.3110709&partnerID=40&md5=615f86c67997931e40baf01a31a4a917
AV - none
N2 - The classical convolutional neural network architecture adheres to static declaration procedures, meaning that the shape of computation is usually predefined and the computation graph is fixed. In this research, the concept of a pluggable micronetwork, which relaxes the static declaration constraint through dynamic layer configuration relay, is proposed. The micronetwork consists of several parallel convolutional layer configurations and relays only the layer settings that incur the minimum loss. The configuration selection logic is based on the conditional computation method, which is implemented as an output layer of the proposed micronetwork. The proposed micronetwork is implemented as an independent pluggable unit and can be used anywhere on the deep learning decision surface with no or minimal configuration changes. The MNIST, FMNIST, CIFAR-10 and STL-10 datasets have been used to validate the proposed research. The proposed technique is shown to be efficient, achieving state-of-the-art performance in fewer iterations with wider and more compact convolutional models. We also briefly discuss the computational complexities involved in these advanced deep neural structures. © 2013 IEEE.
ID - scholars15664
KW - Computation theory; Convolution; Network architecture
KW - Computation methods; Convolution model; Convolution neural network; Decision surfaces; Layer configuration; Minimum loss; Neural structures; State-of-the-art performance
KW - Deep learning
ER -