%0 Journal Article %@ 21693536 %A Kehkashan, T. %A Alsaeedi, A. %A Yafooz, W.M.S. %A Ismail, N.A. %A Al-Dhaqm, A. %D 2024 %F scholars:20206 %I Institute of Electrical and Electronics Engineers Inc. %J IEEE Access %K Extraction; Human computer interaction; Learning systems; Long short-term memory, Combinatorial analysis; Deep learning; Features extraction; Machine-learning; Performance evaluation metrics; Performances evaluation; Systematic; Systematic literature review; Video analysis, Feature extraction %P 35048-35080 %R 10.1109/ACCESS.2024.3357980 %T Combinatorial Analysis of Deep Learning and Machine Learning Video Captioning Studies: A Systematic Literature Review %U https://khub.utp.edu.my/scholars/20206/ %V 12 %X Recent improvements in the area of video captioning have rapidly transformed its methods and the performance of its models. Both machine learning (ML) and deep learning (DL) techniques are employed in this regard. However, the latest studies and their notable results have not been systematically traced. Although several reviews have been conducted on ML and DL algorithms in other areas, no systematic review addresses the video captioning task. This study aims to examine, evaluate, and synthesize the primary studies into a thorough Systematic Literature Review (SLR) that provides a general overview of the methods used for video captioning. We performed the SLR to determine the research problems for which machine learning models were preferred over deep learning models and vice versa. Based on our search string, we collected a total of 1,656 studies from four electronic databases (Scopus, WoS, IEEE Xplore, and ACM), of which 162 published studies passed the selection criteria related to one primary and two secondary research questions after a systematic process. Moreover, insufficient data collection and inefficient comparison of results are common issues identified during the review process. 
We conclude that 2D/3D CNNs for video feature extraction, LSTMs for caption generation, the METEOR and BLEU performance evaluation metrics, and the MSVD dataset are the most frequently employed for video captioning. Our study is the first to compare the implementation of ML and DL algorithms in the video captioning area. Thus, our study will accelerate critical assessment of the state of the art in other research fields of video analysis and human-computer interaction. © 2013 IEEE. %Z cited By 0