%T Fine-tuning Multilingual Transformers for Hausa-English Sentiment Analysis
%A A. Yusuf
%A A. Sarlan
%A K.U. Danyaro
%A A.S.B.A. Rahman
%P 13-18
%K Optimal systems; Statistical tests; Code-switching; Fine-tuning; Hausa; Low-resource languages; Pre-trained models; Sentiment analysis; Sentiment classification; Code-switching phenomenon; Transformers
%X Accurate sentiment analysis is greatly hindered by the code-switching phenomenon, especially in low-resource language settings such as Hausa. However, the majority of previous studies on Hausa sentiment analysis have largely ignored this problem. This study explores transformer fine-tuning techniques for Hausa sentiment classification using three pre-trained multilingual language models: RoBERTa, XLM-R, and mBERT. Multilabel sentiment classification was conducted in Python with the TensorFlow library, using a GPU hardware accelerator on Google Colaboratory. The Twitter dataset used in this study contains 16,849 labelled training samples, 2,677 unlabelled dev samples, and 5,303 unlabelled test samples of tweets; each training sample is labelled positive, negative, or neutral. The findings demonstrate that the mBERT-base-cased model achieves the highest accuracy and F1-score, 0.73 and 0.73 respectively, outperforming the other two pre-trained models. The training and validation accuracy curves of the mBERT model show improvement over time. The study underscores the importance of tailoring the implementation code to meet specific requirements and the significance of fine-tuning pre-trained models for optimal performance. © 2023 IEEE.
%O cited By 3; Conference of 13th International Conference on Information Technology in Asia, CITA 2023; Conference Date: 3 August 2023 Through 4 August 2023; Conference Code: 193023
%J 2023 13th International Conference on Information Technology in Asia, CITA 2023
%L scholars19111
%D 2023
%R 10.1109/CITA58204.2023.10262742