%0 Journal Article
%A Babikir, I.
%A Elsaadany, M.
%A Sajid, M.
%A Laudon, C.
%D 2023
%F scholars:18437
%J Geoenergy Science and Engineering
%K Complex networks; Forestry; Large dataset; Learning algorithms; Neural network models; Offshore oil well production; Sampling; Seismology; Support vector machines, Classification performance; Modeling performance; Sample sizes; Seismic attributes; Seismic facies classification; Supervised machine learning; Training data; Training sample; Training sample size, Classification (of information), experimental study; machine learning; seismic data; seismic source; size distribution; supervised learning
%R 10.1016/j.geoen.2023.211809
%T On the training sample size and classification performance: An experimental evaluation in seismic facies classification
%U https://khub.utp.edu.my/scholars/18437/
%V 226
%X Machine learning algorithms (MLAs) perform better when sufficient high-quality training data are provided. However, a shortage of training data is common in seismic facies classification and many other supervised learning applications. Labeling data for seismic facies classification is time-consuming and requires considerable effort from a domain expert. This study investigates the effect of training data size on the performance of three popular supervised MLAs used for seismic facies classification. We labeled slices from two seismic datasets of diverse geologic environments and varying classification complexity. The AN Field in the Malay Basin represents a simple classification problem with three classes, whereas a more complex six-class classification is defined in the Dangerous Grounds (DG) dataset offshore Sabah. The labeled data were repeatedly halved, resulting in eight training subsets of varying sizes. We trained and evaluated support vector machine (SVM), random forest (RF), and neural network (NN) models using a 10-fold cross-validation (CV) procedure. Performance metrics were computed to study how classification performance changes with training data size. The experimental results show that, for the DG dataset, where classification is complex owing to heterogeneous geology and a larger number of classes, the larger the training subset, the better the classification performance. Nevertheless, for the simple classification scenario of the AN dataset, the classifiers reached a performance plateau when trained on limited samples. We found that the NN model is the best performer on large datasets. The RF classifier performed well on both datasets and proved robust when trained on limited samples of the DG data. The SVM performed best where there was a clear margin of separation between the defined classes (the AN data). In contrast, it performed poorly on the DG data and exhibited a performance decline on the larger AN subsets. © 2023 Elsevier B.V.
%Z cited By 0
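
A minimal sketch (not the authors' code) of the experimental procedure the abstract describes: repeatedly halve a labeled training set into eight subsets and evaluate SVM, RF, and NN classifiers with 10-fold cross-validation at each size. The synthetic data, class count, sample count, and hyperparameters below are placeholder assumptions, not values from the paper.

```python
# Sketch of the abstract's experiment: successive halving of training data,
# evaluated with SVM, RF, and NN under 10-fold cross-validation.
# All data and settings here are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder stand-in for labeled seismic-attribute samples
# (e.g., a six-class case like the DG dataset).
X, y = make_classification(n_samples=20000, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "NN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
}

rng = np.random.default_rng(0)
n = len(y)
for step in range(8):                      # eight subsets, each half the previous size
    size = n // (2 ** step)
    idx = rng.choice(n, size=size, replace=False)
    for name, model in models.items():
        scores = cross_val_score(model, X[idx], y[idx], cv=10, scoring="accuracy")
        print(f"subset={size:6d}  {name:3s}  mean CV accuracy = {scores.mean():.3f}")
```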