%K Computation theory; Convolution; Network architecture; Computation methods; Convolution model; Convolution neural network; Decision surfaces; Layer configuration; Minimum loss; Neural structures; State-of-the-art performance; Deep learning
%X The classical convolutional neural network architecture adheres to static declaration, meaning that the shape of the computation is usually predefined and the computation graph is fixed. This research proposes the concept of a pluggable micronetwork, which relaxes the static declaration constraint through dynamic layer configuration relay. The micronetwork consists of several parallel convolutional layer configurations and relays only the layer settings that incur the minimum loss. The configuration selection logic is based on the conditional computation method and is implemented as the output layer of the proposed micronetwork. The micronetwork is implemented as an independent pluggable unit and can be used anywhere on the deep learning decision surface with no or minimal configuration changes. The MNIST, FMNIST, CIFAR-10 and STL-10 datasets were used to validate the proposed research. The proposed technique is shown to be efficient, obtaining state-of-the-art performance in fewer iterations with wider and more compact convolution models. We also make a preliminary attempt to discuss the computational complexities involved in these advanced deep neural structures. © 2013 IEEE.
%O cited By 1
%L scholars15664
%J IEEE Access
%D 2021
%R 10.1109/ACCESS.2021.3110709
%T Pluggable Micronetwork for Layer Configuration Relay in a Dynamic Deep Neural Surface
%V 9
%I Institute of Electrical and Electronics Engineers Inc.
%A F.U. Khan
%A I.B. Aziz
%A E.A.P. Akhir
%P 124831-124846