%X Word embeddings were created to form meaningful representations of words in an efficient manner. This is an essential step in most Natural Language Processing tasks. In this paper, different Malay language word embedding models were trained on a Malay text corpus. These models were trained using Word2Vec and fastText, each with both CBOW and Skip-gram architectures, and GloVe. The trained models were tested with intrinsic evaluations of semantic similarity and word analogies. In the experiment, the custom-trained fastText Skip-gram model achieved a Pearson correlation coefficient of 0.5509 on the word similarity evaluation and an accuracy of 36.80 on the word analogies evaluation. These results outperformed the fastText pre-trained models, which achieved only 0.477 and 22.96 on the word similarity and word analogies evaluations, respectively. The results show that there is still room for improvement in both the pre-processing tasks and the evaluation datasets. © 2020 IEEE.
%K Correlation methods; Embeddings; Intelligent computing; Semantics, Gram models; Malay languages; Malay texts; Natural language processing; Pearson correlation coefficients; Pre-processing; Semantic similarity; Word similarity, Natural language processing systems
%D 2020
%R 10.1109/ICCI51257.2020.9247707
%O cited By 1; Conference of 2020 International Conference on Computational Intelligence, ICCI 2020; Conference Date: 8 October 2020 Through 9 October 2020; Conference Code: 164916
%J 2020 International Conference on Computational Intelligence, ICCI 2020
%L scholars12648
%T Assessing Suitable Word Embedding Model for Malay Language through Intrinsic Evaluation
%A Y.-T. Phua
%A K.-H. Yew
%A O.-M. Foong
%A M.Y.-W. Teow
%I Institute of Electrical and Electronics Engineers Inc.
%P 202-210