A new history matching sensitivity analysis framework with random forests and Plackett-Burman design

Aulia, A. and Jeong, D. and Mohd Saaid, I. and Shuker, M.T. and El-Khatib, N.A. (2017) A new history matching sensitivity analysis framework with random forests and Plackett-Burman design. In: UNSPECIFIED.

Full text not available from this repository.
Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....

Abstract

To improve on the current industry-standard one-parameter-at-a-time sensitivity analysis method, we propose a new sensitivity analysis framework that utilizes Plackett-Burman design and Random Forests (a well-known data mining method). The new framework significantly reduces the number of required simulation runs (i.e. samples) while substantially reducing the automatic history matching error. The proposed sensitivity analysis framework starts by generating samples/simulations using a Plackett-Burman design, where each simulation is executed with a different combination of the parameters' input values. Once the samples are ready, the parameters' input values and the target vector (i.e. the history matching error vector) are used to construct a Random Forests model. This model is used to rank the importance of each history matching parameter: parameters with low impact on the history matching error are discarded, and the remaining ones are used for Genetic Algorithm-based automatic history matching. The impact of an internal Random Forests parameter (the number of decision trees) on the history matching error is also observed. A highly faulted reservoir with water injection is used for history matching, and 10 uncertain parameters are defined, consisting of fault transmissibilities, permeabilities, and connate water saturation. The aim is to match the bottom hole pressure and water cut for several oil-producing wells. The one-parameter-at-a-time method requires 21 samples, and the top 4 parameters selected by this method are mainly fault transmissibilities; the resulting automatic history matching gave a final error of 463.338. With the new framework, which requires only 12 samples instead of 21, the final error was 628.041 when 100 trees were utilized. When the number of trees was increased to 500, the final error was significantly better (i.e. a 36% improvement) than the result obtained from the one-parameter-at-a-time method, and with the new framework the majority of the top 4 parameters are related to permeabilities. There is no significant change in computation time as the number of trees increases from 100 to 500; total computation time is less than 10 seconds. For this case study, not only does the new framework require significantly fewer initial simulation samples, it also significantly improves the history matching error compared to the industry-standard one-parameter-at-a-time sensitivity analysis method. Practicing engineers can therefore utilize this framework to save time and simultaneously improve the accuracy of the history matching parameter ranking; the parameter ranking computation with Random Forests takes less than 10 seconds.
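The two-stage workflow described in the abstract (Plackett-Burman sampling followed by Random Forests importance ranking) can be sketched as follows. This is a minimal illustration, assuming the pyDOE2 package for the Plackett-Burman design and scikit-learn for the Random Forests model; the parameter names, bounds, and the run_simulation_and_error placeholder are hypothetical stand-ins for the paper's reservoir simulator, error metric, and the subsequent Genetic Algorithm step, which are not reproduced here.

# Sketch: Plackett-Burman design + Random Forests parameter ranking
import numpy as np
from pyDOE2 import pbdesign                      # Plackett-Burman design generator
from sklearn.ensemble import RandomForestRegressor

# Hypothetical uncertain parameters (the paper uses 10: fault transmissibilities,
# permeabilities, and connate water saturation) with illustrative low/high bounds.
param_names = [f"p{i}" for i in range(10)]
low = np.full(10, 0.1)
high = np.full(10, 1.0)

# Step 1: Plackett-Burman design. For 10 factors this yields 12 runs coded as -1/+1.
design = pbdesign(10)                            # shape: (12, 10)
X = low + (design + 1.0) / 2.0 * (high - low)    # map coded levels to physical values

# Step 2: run the reservoir simulator for each row and compute the history matching
# error (mismatch in bottom hole pressure and water cut). The function below is a
# placeholder for that expensive step, not the real simulator.
def run_simulation_and_error(params):
    return float(np.sum(params ** 2))

y = np.array([run_simulation_and_error(x) for x in X])

# Step 3: fit a Random Forests model (100 or 500 trees, as in the study) and
# rank parameters by impurity-based feature importance.
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X, y)
ranking = sorted(zip(param_names, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)

# Step 4: keep the top-ranked parameters (top 4 in the paper) for the subsequent
# GA-based automatic history matching.
top4 = [name for name, _ in ranking[:4]]
print("Top 4 parameters:", top4)

For 10 factors, pbdesign returns a 12-run design, consistent with the 12 samples reported in the abstract; the discarded low-importance parameters would be fixed, and only the top-ranked ones passed on to the Genetic Algorithm-based history matching.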

Item Type: Conference or Workshop Item (UNSPECIFIED)
Additional Information: Cited by 7; Conference: SPE Symposium: Production Enhancement and Cost Optimisation 2017; Conference Date: 7 November 2017 through 8 November 2017; Conference Code: 133067
Uncontrolled Keywords: Bottom hole pressure; Data mining; Decision trees; Errors; Forestry; Genetic algorithms; Parameter estimation; Petroleum reservoirs; Uncertainty analysis, Automatic History Matching; Data mining methods; Generating samples; History matching; Industry standards; Plackett-Burman designs; Practicing engineers; Total computation time, Sensitivity analysis
Depositing User: Mr Ahmad Suhairi UTP
Date Deposited: 09 Nov 2023 16:20
Last Modified: 09 Nov 2023 16:20
URI: https://khub.utp.edu.my/scholars/id/eprint/9015
