Using natural language processing to evaluate the impact of specialized transformers models on medical domain tasks

Soufyane Ayanouz, Boudhir Anouar Abdelhakim, Mohammed Ben Ahmed

Abstract


We are presently living in the age of intelligent machines: thanks to technological breakthroughs and advances in machine learning, deep learning, and artificial intelligence, machines are rapidly imitating human abilities. Our work builds on the idea of using a specialized corpus to enhance the performance of a pre-trained language model. We describe each configuration as a triple (V = vocabulary domain, C1 = initial pre-training corpus, C2 = specialization corpus) and compare the combinations (V = general, C1 = general, C2 = ∅), (V = general, C1 = general, C2 = medical), (V = medical, C1 = medical, C2 = ∅), and (V = medical, C1 = medical, C2 = medical) to contrast a general bidirectional encoder representations from transformers (BERT) model with BERT models specialized for the medical domain. In addition, we evaluated the models on the informatics for integrating biology and the bedside (i2b2) and drug-drug interaction (DDI) datasets to measure their effectiveness on medical tasks.
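To make the (V, C1, C2) setup concrete, the sketch below illustrates the (V = general, C1 = general, C2 = medical) combination as continued masked-language-model pre-training of a general BERT checkpoint on a medical corpus, assuming the HuggingFace transformers and datasets libraries. The checkpoint name, corpus path, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: specialize a general BERT model on a medical corpus (C2)
# via masked language modeling. Paths and hyperparameters are hypothetical.
from datasets import load_dataset
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# V = general vocabulary, C1 = general corpus: start from a general BERT checkpoint.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# C2 = medical specialization corpus, one document per line (file path is a placeholder).
corpus = load_dataset("text", data_files={"train": "medical_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Masked-language-modeling objective used to continue pre-training on C2.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-medical-specialized",
    num_train_epochs=1,                # illustrative; real specialization runs train far longer
    per_device_train_batch_size=16,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
```

The resulting checkpoint can then be fine-tuned and evaluated on downstream medical tasks such as entity recognition (i2b2) and relation classification (DDI); swapping the starting checkpoint and corpus yields the other (V, C1, C2) combinations.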

Keywords


Deep learning; Entity recognition; Intelligent machines; Medical; Natural language processing; Relation classification



DOI: http://doi.org/10.11591/ijai.v13.i2.pp1732-1740



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN: 2089-4872, e-ISSN: 2252-8938
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
