Large language models for pattern recognition in text data

Aknur Kosayakova, Kurmashev Ildar, Luigi La Spada, Nida Zeeshan, Makhabbat Bakyt, Moldamurat Khuralay, Omirzak Abdirashev

Abstract


Large language models (LLMs) are widely deployed in settings where both reliability and efficiency matter. We present a calibrated, seed‑robust empirical comparison of a fine‑tuned encoder model (bidirectional encoder representations from transformers (BERT)‑base) and a decoder‑only in‑context model (generative pre‑trained transformer (GPT)‑2 small) across the Stanford question answering dataset v2.0 (SQuAD v2.0) and two general language understanding evaluation (GLUE) tasks: multi‑genre natural language inference (MNLI) and the Stanford sentiment treebank 2 (SST‑2). Beyond accuracy, we assess reliability (expected calibration error with reliability diagrams and confidence–coverage analysis) and efficiency (latency, memory, throughput) under matched conditions and three fixed seeds. BERT‑base yields higher accuracy and lower calibration error, while GPT‑2 narrows the gap under few‑shot prompting but remains more sensitive to prompt design and context length. Efficiency benchmarks show that decoder‑only prompting incurs near‑linear latency and memory growth with the number of k‑shot exemplars, whereas fine‑tuned encoders maintain a stable per‑example cost. These findings offer practical guidance on when to prefer fine‑tuning over prompting and demonstrate that reliability must be evaluated alongside accuracy for risk‑aware deployment.
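For context, the sketch below illustrates the expected calibration error (ECE) metric named in the abstract, using equal-width confidence bins. This is a minimal illustrative implementation, not the paper's exact protocol: the bin count, binning scheme, and toy data are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-bin ECE: the weighted mean |accuracy - confidence| gap per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Assign each prediction to the half-open bin (lo, hi]; the first bin also takes 0.0.
        mask = (confidences > lo) & (confidences <= hi) if lo > 0 else (confidences <= hi)
        if mask.any():
            bin_conf = confidences[mask].mean()  # average predicted confidence in the bin
            bin_acc = correct[mask].mean()       # empirical accuracy in the bin
            ece += mask.mean() * abs(bin_acc - bin_conf)  # weight by bin occupancy
    return ece

# Toy usage (hypothetical values): model confidences and per-prediction correctness.
conf = np.array([0.95, 0.80, 0.65, 0.90, 0.55])
hits = np.array([1, 1, 0, 1, 0])
print(f"ECE = {expected_calibration_error(conf, hits):.3f}")
```

A reliability diagram plots the same per-bin accuracy against per-bin confidence; a well-calibrated model tracks the diagonal, and ECE summarizes the deviation from it as a single number.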

Keywords


BERT-base; Computational efficiency; Expected calibration error; GPT-2; In-context learning; Large language models; Question answering



DOI: http://doi.org/10.11591/ijai.v14.i6.pp5311-5332



Copyright (c) 2025 Aknur Kosayakova, Kurmashev Ildar, Luigi La Spada, Nida Zeeshan, Makhabbat Bakyt, Moldamurat Khuralay, Omirzak Abdirashev

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938 
This journal is published by the Institute of Advanced Engineering and Science (IAES).
