TY - CONF
TI - Application Integration Framework for Large Language Models
A1 - Ho, Joe Ee
A1 - Ooi, Boonyaik Yaik
A1 - Westner, Markus K.
KW - Data assimilation
KW - Data integration
KW - Network security
KW - reductions
KW - AI integration
KW - Application integration framework
KW - Data pipeline optimization
KW - Data pipelines
KW - Data processing framework
KW - Error handling
KW - Language model
KW - Pipeline optimization
KW - Token efficiency
N2 - Large Language Models (LLMs) have unlocked new opportunities in processing non-structured information. However, integrating LLMs into conventional applications poses challenges due to their non-deterministic nature. This paper introduces a framework designed to integrate LLMs into intermediate modules effectively by ensuring more consistent and reliable outputs. The framework includes three key components: the Sieve, which captures and retries processing of incorrect outputs; the Circuit Breaker, which stops processing persistently incorrect outputs; and the Optimizer, which enhances processing efficiency by combining inputs into single prompts. Experimental results employing a structured methodology demonstrate the framework's effectiveness, achieving significant improvements: a 71.05% reduction in processing time and an 82.97% reduction in token usage while maintaining high accuracy. The proposed framework, agnostic to specific LLM implementations, aids the integration of LLMs into diverse applications, enhancing automation and efficiency in fields such as finance, healthcare, and education. © 2024 IEEE.
SP - 398
EP - 403
PB - Institute of Electrical and Electronics Engineers Inc.
SN - 9798331528553
AV - none
N1 - Cited by: 2
UR - https://www.scopus.com/inward/record.uri?eid=2-s2.0-85209631691&doi=10.1109%2FAiDAS63860.2024.10730541&partnerID=40&md5=e6c7b763073dd52c258251f2542e1756
Y1 - 2024///
ID - scholars20421
ER -