A Comparative Study on Deep Feature Extraction Approaches for Visual Tracking of Industrial Robots

Amosa, T.I. and Sebastian, P. and Izhar, L.I.B. and Ibrahim, O. (2022) A Comparative Study on Deep Feature Extraction Approaches for Visual Tracking of Industrial Robots. In: 3rd IEEE Industrial Electronics and Applications Conference (IEACon 2022), 3-4 October 2022.

Full text not available from this repository.
Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....

Abstract

Owing to several breakthroughs in deep-learning research, visual object trackers based on deep learning algorithms have become the de facto method in the tracking community. However, the performance of this category of trackers is largely influenced by the richness and quality of the target feature representation. Thus, to extract features with high discriminative power, special attention must be paid to the feature extraction component of the tracker. Despite the significance of the feature extraction paradigm in deep learning-based trackers, few studies have comparatively examined the feature learning and representation capabilities of existing deep feature extraction approaches in visual tracking. In this paper, we present a comparative study of two deep feature extraction methods for visual tracking of industrial robots in video. This work examines and visualizes the feature representation capability of the two feature extraction methods most commonly employed in deep learning-based tracking, namely Convolutional Neural Network (CNN)-based and Transformer-based feature extraction. To demonstrate the effectiveness of the deep learning-based tracking paradigm, this study adopts a Siamese-based tracker trained end-to-end offline on the ILSVRC15 dataset, as well as a Transformer-based tracking architecture pre-trained on the ImageNet dataset. This study also contributes a dataset of industrial robot images, which was employed for evaluating the two adopted trackers. From this study, we conclude that the feature learning and representation capability of the Transformer-based feature extraction pipeline largely improves the overall tracking performance. © 2022 IEEE.
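To make the contrast between the two feature extraction pipelines concrete, below is a minimal, illustrative PyTorch sketch (not the authors' implementation): a Siamese tracking skeleton whose backbone can be swapped between a small CNN and a small ViT-style Transformer encoder, feeding a depth-wise cross-correlation head that produces a response map. All layer sizes, strides, and the correlation head are hypothetical choices for demonstration.

# Illustrative sketch only: Siamese tracker with a swappable feature
# extractor (CNN vs. Transformer). Layer sizes and the correlation head
# are hypothetical, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNBackbone(nn.Module):
    """Toy convolutional feature extractor (overall stride 8)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)  # (B, C, H/8, W/8)

class TransformerBackbone(nn.Module):
    """Toy ViT-style extractor: patch embedding + Transformer encoder."""
    def __init__(self, dim=64, patch=8, depth=2, heads=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        f = self.patch_embed(x)                 # (B, C, H/8, W/8)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)   # (B, HW, C)
        tokens = self.encoder(tokens)           # global self-attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class SiameseTracker(nn.Module):
    """Shared-backbone Siamese tracker with cross-correlation head."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

    def forward(self, template, search):
        z = self.backbone(template)  # exemplar (target) features
        x = self.backbone(search)    # search-region features
        # Depth-wise cross-correlation: slide the exemplar features over
        # the search features, one group per (sample, channel) pair.
        b, c, h, w = z.shape
        x = x.reshape(1, b * c, *x.shape[-2:])
        response = F.conv2d(x, z.reshape(b * c, 1, h, w), groups=b * c)
        return response.reshape(b, c, *response.shape[-2:]).sum(1, keepdim=True)

if __name__ == "__main__":
    template = torch.randn(2, 3, 128, 128)  # exemplar crop
    search = torch.randn(2, 3, 256, 256)    # search-region crop
    for name, bb in [("CNN", CNNBackbone()), ("Transformer", TransformerBackbone())]:
        score = SiameseTracker(bb)(template, search)
        print(name, score.shape)  # response-map peak ~ target position

The architectural difference the paper studies is visible in the two backbones: the CNN builds features from local convolutions only, while the Transformer backbone relates all patches to one another through global self-attention before the same correlation step.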

Item Type: Conference or Workshop Item (UNSPECIFIED)
Additional Information: Cited by 1; 3rd IEEE Industrial Electronics and Applications Conference (IEACon 2022); Conference Date: 3-4 October 2022; Conference Code: 184531
Uncontrolled Keywords: Comparative studies; Convolution; Convolutional neural networks; Deep learning; Feature extraction; Feature extraction methods; Feature learning; Feature representation; Industrial robots; Learning algorithms; Robot vision; Tracking (position); Transformer networks; Visual object tracking; Visual tracking
Depositing User: Mr Ahmad Suhairi UTP
Date Deposited: 19 Dec 2023 03:23
Last Modified: 19 Dec 2023 03:23
URI: https://khub.utp.edu.my/scholars/id/eprint/17348
