SMU Data Science Review

Abstract

Machine reading comprehension and question answering are topics of considerable focus in the field of Natural Language Processing (NLP). In recent years, language models like Bidirectional Encoder Representations from Transformers (BERT) [3] have been very successful in language-related tasks such as question answering. The difficulty of the question answering task lies in developing accurate representations of language and in producing correct answers to questions. In this study, the focus is to investigate how to train and fine-tune a BERT model to improve its performance on BioASQ, a large-scale biomedical question answering challenge. Our most accurate BERT model achieved an F1 score of 76.44 on BioASQ, indicating successful performance in biomedical question answering.
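
As an illustration of the approach the abstract describes, the sketch below shows extractive question answering with a BERT model: the model predicts start and end token positions of the answer span within a context passage. This is a minimal sketch using the Hugging Face Transformers library, not the authors' actual pipeline; the checkpoint name (a publicly available BERT model fine-tuned on SQuAD rather than BioASQ) and the example question are assumptions for demonstration only.

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Placeholder checkpoint: a BERT model fine-tuned on SQuAD, not the
# authors' BioASQ-tuned model.
checkpoint = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# Hypothetical biomedical question/context pair for illustration.
question = "Which gene is mutated in cystic fibrosis?"
context = "Cystic fibrosis is caused by mutations in the CFTR gene."

# Encode the question and context as a single sequence pair.
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# The QA head outputs one logit per token for the answer's start and end;
# take the argmax of each and decode the enclosed span as the answer.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)  # expected span: "CFTR"

Fine-tuning on a dataset like BioASQ follows the same structure, with a cross-entropy loss applied to the start and end logits against the gold answer span positions.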

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License.
