SMU Data Science Review
Abstract
Machine reading comprehension and question answering are topics of considerable focus in Natural Language Processing (NLP). In recent years, language models such as Bidirectional Encoder Representations from Transformers (BERT) [3] have been highly successful on language-related tasks such as question answering. The difficulty of question answering lies in developing accurate representations of language and in producing correct answers to questions. In this study, we investigate how to train and fine-tune a BERT model to improve its performance on BioASQ, a challenge on large-scale biomedical question answering. Our most accurate BERT model achieved an F1 score of 76.44 on BioASQ, indicating successful performance on biomedical question answering.
Recommended Citation
Fu, Eric R.; Djoko, Rikel; Mansor, Maysam; and Slater, Robert (2020) "BERT for Question Answering on BioASQ," SMU Data Science Review: Vol. 3: No. 3, Article 3.
Available at: https://scholar.smu.edu/datasciencereview/vol3/iss3/3
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.