Transfer Learning for Question Answering at IBM Research AI


Speaker: Avi Sil (IBM)

Date and Time: October 25 (Friday)

Place: TBD

Abstract:

Recent Question Answering research involves several algorithmic components, such as Attention-over-Attention, coupled with data augmentation and ensembling strategies that have been shown to yield state-of-the-art results on benchmark datasets like SQuAD, even achieving superhuman performance. However, contrary to these prior results, when we evaluate on the recently proposed Natural Questions benchmark dataset, we find that a remarkably simple transfer learning approach based on BERT outperforms the previous state-of-the-art (SOTA) system, which was trained on 4 million more examples than ours, by 1.9 F1 points. Adding ensembling strategies improves that result by a further 2.3 F1 points. Finally, I will introduce the IBM TechQA dataset as a benchmark that pushes SOTA QA systems to test their transfer learning capabilities in a low-resource domain adaptation setting.
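
For readers unfamiliar with the transfer learning recipe the abstract refers to, the sketch below shows the general pattern (a pre-trained BERT encoder plus a span-prediction head, fine-tuned on QA data), using the Hugging Face Transformers library and its publicly available SQuAD-fine-tuned BERT checkpoint. This is an illustrative assumption on our part, not the speaker's actual system.

# A minimal sketch of extractive QA via transfer learning from BERT,
# using the Hugging Face Transformers library. Illustrative only; this
# is not the speaker's Natural Questions system.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# A publicly available BERT checkpoint already fine-tuned on SQuAD.
checkpoint = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

question = "What does an extractive QA model predict?"
context = ("Extractive QA models read a passage and predict the start "
           "and end positions of the answer span within it.")

# Encode the question and passage as a single sequence pair.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The span-prediction head emits one start logit and one end logit per
# token; the highest-scoring (start, end) pair delimits the answer.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)  # e.g. "the start and end positions of the answer span"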

Bio:

Avi Sil is a Research Scientist and the chair of the NLP community at IBM Research AI. His research interests are in Question Answering and Information Extraction. He has served as an Area Chair for several NLP conferences, such as ACL, EMNLP, and NAACL. Currently, he is the technical lead of a team working on QA and Natural Language Understanding tasks. His team is behind the current top-performing system on the Google Natural Questions leaderboard, as well as the SuperGLUE submission (pre-RoBERTa phase). His interests also span Entity Recognition and Linking, and his systems have won past TAC KBP evaluations.