# Awesome Question Answering
A curated list of resources on Question Answering (QA), a computer science discipline within the fields of information retrieval and natural language processing (NLP), with an emphasis on machine learning and deep learning approaches.
## Contents

## Recent Trends

### Recent QA Models
- DilBert: Delaying Interaction Layers in Transformer-based Encoders for Efficient Open Domain Question Answering (2020)
  - paper: https://arxiv.org/pdf/2010.08422.pdf
  - github: https://github.com/wissam-sib/dilbert
- UnifiedQA: Crossing Format Boundaries With a Single QA System (2020)
  - demo: https://unifiedqa.apps.allenai.org/
- ProQA: Resource-efficient method for pretraining a dense corpus index for open-domain QA and IR (2020)
  - paper: https://arxiv.org/pdf/2005.00038.pdf
  - github: https://github.com/xwhan/ProQA
- TYDI QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages (2020)
  - paper: https://arxiv.org/ftp/arxiv/papers/2003/2003.05002.pdf
- Retrospective Reader for Machine Reading Comprehension
  - paper: https://arxiv.org/pdf/2001.09694v2.pdf
- TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection (AAAI 2020)
  - paper: https://arxiv.org/pdf/1911.04118.pdf

### Recent Language Models
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, Kevin Clark, et al., ICLR, 2020.
- TinyBERT: Distilling BERT for Natural Language Understanding, Xiaoqi Jiao, et al., ICLR, 2020.
- MINILM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers, Wenhui Wang, et al., arXiv, 2020.
- T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, Colin Raffel, et al., arXiv preprint, 2019.
- ERNIE: Enhanced Language Representation with Informative Entities, Zhengyan Zhang, et al., ACL, 2019.
- XLNet: Generalized Autoregressive Pretraining for Language Understanding, Zhilin Yang, et al., arXiv preprint, 2019.
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, Zhenzhong Lan, et al., arXiv preprint, 2019.
- RoBERTa: A Robustly Optimized BERT Pretraining Approach, Yinhan Liu, et al., arXiv preprint, 2019.
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, Victor Sanh, et al., arXiv, 2019.
- SpanBERT: Improving Pre-training by Representing and Predicting Spans, Mandar Joshi, et al., TACL, 2019.
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Jacob Devlin, et al., NAACL 2019, 2018.

### AAAI 2020
- TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection, Siddhant Garg, et al., AAAI 2020, Nov 2019.

### ACL 2019
- Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering, Asma Ben Abacha, et al., ACL-W 2019, Aug 2019.
- Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications, Wei Zhao, et al., ACL 2019, Jun 2019.
- Cognitive Graph for Multi-Hop Reading Comprehension at Scale, Ming Ding, et al., ACL 2019, Jun 2019.
- Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index, Minjoon Seo, et al., ACL 2019, Jun 2019.
- Unsupervised Question Answering by Cloze Translation, Patrick Lewis, et al., ACL 2019, Jun 2019.
- SemEval-2019 Task 10: Math Question Answering, Mark Hopkins, et al., ACL-W 2019, Jun 2019.
- Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader, Wenhan Xiong, et al., ACL 2019, May 2019.
- Matching Article Pairs with Graphical Decomposition and Convolutions, Bang Liu, et al., ACL 2019, May 2019.
- Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data, Moonsu Han, et al., ACL 2019, Mar 2019.
- Natural Questions: A Benchmark for Question Answering Research, Tom Kwiatkowski, et al., TACL 2019, Jan 2019.
- Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension, Daesik Kim, et al., ACL 2019, Nov 2018.

### EMNLP-IJCNLP 2019
- Language Models as Knowledge Bases?, Fabio Petroni, et al., EMNLP-IJCNLP 2019, Sep 2019.
- LXMERT: Learning Cross-Modality Encoder Representations from Transformers, Hao Tan, et al., EMNLP-IJCNLP 2019, Dec 2019.
- Answering Complex Open-domain Questions Through Iterative Query Generation, Peng Qi, et al., EMNLP-IJCNLP 2019, Oct 2019.
- KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning, Bill Yuchen Lin, et al., EMNLP-IJCNLP 2019, Sep 2019.
- Mixture Content Selection for Diverse Sequence Generation, Jaemin Cho, et al., EMNLP-IJCNLP 2019, Sep 2019.
- A Discrete Hard EM Approach for Weakly Supervised Question Answering, Sewon Min, et al., EMNLP-IJCNLP 2019, Sep 2019.

### arXiv
- Investigating the Successes and Failures of BERT for Passage Re-Ranking, Harshith Padigela, et al., arXiv preprint, May 2019.
- BERT with History Answer Embedding for Conversational Question Answering, Chen Qu, et al., arXiv preprint, May 2019.
- Understanding the Behaviors of BERT in Ranking, Yifan Qiao, et al., arXiv preprint, Apr 2019.
- BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis, Hu Xu, et al., arXiv preprint, Apr 2019.
- End-to-End Open-Domain Question Answering with BERTserini, Wei Yang, et al., arXiv preprint, Feb 2019.
- A BERT Baseline for the Natural Questions, Chris Alberti, et al., arXiv preprint, Jan 2019.
- Passage Re-ranking with BERT, Rodrigo Nogueira, et al., arXiv preprint, Jan 2019.
- SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering, Chenguang Zhu, et al., arXiv, Dec 2018.

### Dataset
- ELI5: Long Form Question Answering, Angela Fan, et al., ACL 2019, Jul 2019.
- CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense, Michael Chen, et al., RepEval 2019, Jun 2019.
## About QA

### Types of QA
- Single-turn QA: answer without considering any conversational context
- Conversational QA: use previous conversation turns (e.g., “Who wrote Hamlet?” followed by “When was he born?”)

#### Subtypes of QA

- Knowledge-based QA
- Table/List-based QA
- Text-based QA
- Community-based QA
- Visual QA
### Analysis and Parsing for Pre-processing in QA Systems

#### Language Analysis

1. Morphological analysis
2. Named Entity Recognition (NER)
3. Homonyms / polysemy analysis
4. Syntactic parsing (dependency parsing)
5. Semantic recognition
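To make a few of these steps concrete, here is a minimal sketch using spaCy (an assumed choice of toolkit; `en_core_web_sm` is just one small English pipeline, and any library offering lemmatization, NER, and dependency parsing would do):

```python
# Minimal QA pre-processing sketch with spaCy (assumed toolkit).
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline

question = "Which champions did IBM Watson defeat on Jeopardy! in 2011?"
doc = nlp(question)

# 1. Morphological analysis: tokens, lemmas, part-of-speech tags
print([(t.text, t.lemma_, t.pos_) for t in doc])

# 2. Named Entity Recognition (NER)
print([(ent.text, ent.label_) for ent in doc.ents])

# 4. Syntactic (dependency) parsing: token -> relation -> head
print([(t.text, t.dep_, t.head.text) for t in doc])
```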
### Most QA systems have roughly three parts

- Fact extraction
  - Entity extraction
    - Named Entity Recognition (NER)
  - Relation extraction
- Understanding the question
- Generating an answer
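The division above can be seen in the following toy sketch. Every function here is a hypothetical stub (not taken from any particular system); it only shows how the three stages hand data to each other:

```python
# Hypothetical three-stage QA skeleton; all names and the triple format are illustrative.
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str

def extract_facts(corpus):
    """Stage 1 - fact extraction: entity + relation extraction over source text (stubbed)."""
    return [Fact("IBM Watson", "defeated", "top Jeopardy! champions")]

def understand_question(question):
    """Stage 2 - question understanding: map the question to a target subject/relation (stubbed)."""
    return {"subject": "IBM Watson", "relation": "defeated"}

def generate_answer(query, facts):
    """Stage 3 - answer generation: look up a matching fact and return its object."""
    for fact in facts:
        if fact.subject == query["subject"] and fact.relation == query["relation"]:
            return fact.obj
    return "No answer found."

facts = extract_facts(["IBM Watson defeated top Jeopardy! champions in 2011."])
query = understand_question("Whom did IBM Watson defeat?")
print(generate_answer(query, facts))  # -> top Jeopardy! champions
```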
Events
- Wolfram Alpha launced the answer engine in 2009.
-
IBM Watson system defeated top
Jeopardy! champions in
2011.
- Apple’s Siri integrated Wolfram Alpha’s answer engine in 2011.
-
Google embraced QA by launching its Knowledge Graph, leveraging the free
base knowledge base in 2012.
-
Amazon Echo | Alexa (2015), Google Home | Google Assistant (2016),
INVOKE | MS Cortana (2017), HomePod (2017)
### Systems

- IBM Watson - Has state-of-the-art performance.
- Facebook DrQA - Applied to the SQuAD1.0 dataset. The SQuAD2.0 dataset has been released, but DrQA has not been tested on it yet.
- MIT Media Lab’s knowledge graph - A freely available semantic network designed to help computers understand the meanings of the words that people use.
Competitions in QA
0 |
Story Cloze Test
|
English |
Univ. of Rochester |
2016 |
msap |
Logistic regression |
Closed |
x |
1 |
MS MARCO |
English |
Microsoft |
2016 |
YUANFUDAO research NLP |
MARS |
Closed |
o |
2 |
MS MARCO V2 |
English |
Microsoft |
2018 |
NTT Media Intelli. Lab. |
Masque Q&A Style |
Opened |
x |
3 |
SQuAD |
English |
Univ. of Stanford |
2018 |
XLNet (single model) |
XLNet Team |
Closed |
o |
4 |
SQuAD 2.0
|
English |
Univ. of Stanford |
2018 |
PINGAN Omni-Sinitic |
ALBERT + DAAF + Verifier (ensemble) |
Opened |
o |
5 |
TriviaQA |
English |
Univ. of Washington |
2017 |
Ming Yan |
- |
Closed |
- |
6 |
decaNLP |
English |
Salesforce Research |
2018 |
Salesforce Research |
MQAN |
Closed |
x |
7 |
DuReader Ver1.
|
Chinese |
Baidu |
2015 |
Tryer |
T-Reader (single) |
Closed |
x |
8 |
DuReader Ver2.
|
Chinese |
Baidu |
2017 |
renaissance |
AliReader |
Opened |
- |
9 |
KorQuAD
|
Korean |
LG CNS AI Research |
2018 |
Clova AI LaRva Team |
LaRva-Kor-Large+ + CLaF (single) |
Closed |
o |
10 |
KorQuAD 2.0 |
Korean |
LG CNS AI Research |
2019 |
Kangwon National University |
KNU-baseline(single model) |
Opened |
x |
11 |
CoQA |
English |
Univ. of Stanford |
2018 |
Zhuiyi Technology |
RoBERTa + AT + KD (ensemble) |
Opened |
o |
## Publications

### Papers

- “Learning to Skim Text”, Adams Wei Yu, Hongrae Lee, Quoc V. Le, 2017.
  - Show only what you want in the text.
- “Deep Joint Entity Disambiguation with Local Neural Attention”, Octavian-Eugen Ganea and Thomas Hofmann, 2017.
- “Bi-Directional Attention Flow for Machine Comprehension”, Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi, ICLR, 2017.
- “Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks”, Matthew Francis-Landau, Greg Durrett and Dan Klein, NAACL-HLT, 2016.
  - https://GitHub.com/matthewfl/nlp-entity-convnet
- “Entity Linking with a Knowledge Base: Issues, Techniques, and Solutions”, Wei Shen, Jianyong Wang, Jiawei Han, IEEE Transactions on Knowledge and Data Engineering (TKDE), 2014.
- “Introduction to ‘This is Watson’”, D. A. Ferrucci, IBM Journal of Research and Development, 2012.
- “A survey on question answering technology from an information retrieval perspective”, Information Sciences, 2011.
- “Question Answering in Restricted Domains: An Overview”, Diego Mollá and José Luis Vicedo, Computational Linguistics, 2007.
- “Natural language question answering: the view from here”, L. Hirschman, R. Gaizauskas, Natural Language Engineering, 2001.
- Entity Disambiguation / Entity Linking
## Codes

- BiDAF - The Bi-Directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization (see the attention sketch after this list).
  - Official; Tensorflow v1.2
  - Paper
- QANet - A Q&A architecture that does not require recurrent networks: its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions (a sketch of one encoder block appears after this list).
  - Google; Unofficial; Tensorflow v1.5
  - Paper
- R-Net - An end-to-end neural network model for reading-comprehension-style question answering, which aims to answer questions from a given passage.
  - MS; Unofficial, by HKUST; Tensorflow v1.5
  - Paper
- R-Net-in-Keras - R-NET re-implementation in Keras.
  - MS; Unofficial; Keras v2.0.6
  - Paper
- DrQA - DrQA is a system for reading comprehension applied to open-domain question answering.
  - Facebook; Official; Pytorch v0.4
  - Paper
- BERT - A new language representation model which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers (a short usage example appears after this list).
  - Google; Official implementation; Tensorflow v1.11.0
  - Paper
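For the BiDAF entry above, the core of the bi-directional attention flow layer can be written down compactly. The following is a minimal NumPy sketch for a single unbatched example with a random weight vector standing in for the trained one; it follows the similarity-matrix, context-to-query, and query-to-context formulation of the paper and is illustrative only, not the linked implementation:

```python
# Minimal sketch of BiDAF attention (single example, no batch dimension).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(H, U, w):
    """H: (T, d) context, U: (J, d) query, w: (3d,) similarity weight vector."""
    T, d = H.shape
    J, _ = U.shape
    # Similarity matrix S[t, j] = w . [h_t ; u_j ; h_t * u_j]
    Ht = H[:, None, :].repeat(J, axis=1)              # (T, J, d)
    Uj = U[None, :, :].repeat(T, axis=0)              # (T, J, d)
    S = np.concatenate([Ht, Uj, Ht * Uj], axis=-1) @ w  # (T, J)
    # Context-to-query attention: attended query vector per context word
    U_tilde = softmax(S, axis=1) @ U                  # (T, d)
    # Query-to-context attention: one attended context vector, tiled over T
    b = softmax(S.max(axis=1), axis=0)                # (T,)
    H_tilde = np.tile(b @ H, (T, 1))                  # (T, d)
    # Query-aware context representation G (no early summarization)
    return np.concatenate([H, U_tilde, H * U_tilde, H * H_tilde], axis=1)  # (T, 4d)

T, J, d = 5, 3, 8
G = bidaf_attention(np.random.randn(T, d), np.random.randn(J, d), np.random.randn(3 * d))
print(G.shape)  # (5, 32)
```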
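For the QANet entry above, a rough PyTorch sketch of one encoder block: depthwise-separable convolutions model local interactions, multi-head self-attention models global interactions, and a feed-forward layer closes the block, each behind layer norm with a residual connection. Hyperparameters are illustrative, positional encodings are omitted, and this is not the referenced implementation:

```python
# Rough sketch of a QANet-style encoder block (illustrative hyperparameters).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, dim, kernel_size=7):
        super().__init__()
        self.depthwise = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)
        self.pointwise = nn.Conv1d(dim, dim, 1)

    def forward(self, x):                      # x: (batch, seq, dim)
        y = x.transpose(1, 2)                  # Conv1d expects (batch, dim, seq)
        return self.pointwise(self.depthwise(y)).transpose(1, 2)

class QANetEncoderBlock(nn.Module):
    def __init__(self, dim=128, heads=8, num_convs=4):
        super().__init__()
        self.convs = nn.ModuleList([DepthwiseSeparableConv(dim) for _ in range(num_convs)])
        self.conv_norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_convs)])
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(), nn.Linear(dim * 4, dim))
        self.ffn_norm = nn.LayerNorm(dim)

    def forward(self, x):
        for conv, norm in zip(self.convs, self.conv_norms):
            x = x + conv(norm(x))              # convolution: local interactions
        y = self.attn_norm(x)
        x = x + self.attn(y, y, y)[0]          # self-attention: global interactions
        return x + self.ffn(self.ffn_norm(x))  # position-wise feed-forward

block = QANetEncoderBlock()
print(block(torch.randn(2, 50, 128)).shape)    # torch.Size([2, 50, 128])
```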
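For the BERT entry, the quickest way to try a BERT-style extractive QA reader is through the Hugging Face `transformers` library rather than the official TensorFlow repository linked above; the checkpoint named below is one publicly released SQuAD fine-tune, chosen purely for illustration:

```python
# Extractive QA with a BERT-style reader via Hugging Face transformers (assumed setup).
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="What does BERT stand for?",
    context=(
        "BERT stands for Bidirectional Encoder Representations from Transformers. "
        "It pre-trains deep bidirectional representations by jointly conditioning "
        "on both left and right context in all layers."
    ),
)
print(result["answer"], result["score"])  # predicted answer span and its confidence
```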
## Lectures

## Slides

## Dataset Collections
## Datasets

- AI2 Science Questions v2.1 (2017)
  - It consists of questions used in student assessments in the United States across elementary and middle school grade levels. Each question is in 4-way multiple-choice format and may or may not include a diagram element.
  - Paper: http://ai2-website.s3.amazonaws.com/publications/AI2ReasoningChallenge2018.pdf
- Children’s Book Test
  - It is part of the bAbI project of Facebook AI Research, which is organized towards the goal of automatic text understanding and reasoning. The CBT is designed to measure directly how well language models can exploit wider linguistic context.
- CODAH Dataset
- DeepMind Q&A Dataset; CNN/Daily Mail
  - Hermann et al. (2015) created two awesome datasets using news articles for Q&A research. Each dataset contains many documents (90k and 197k each), and each document is accompanied by approximately 4 questions on average. Each question is a sentence with one missing word/phrase which can be found in the accompanying document/context.
  - Paper: https://arxiv.org/abs/1506.03340
- ELI5
  - Paper: https://arxiv.org/abs/1907.09190
- GraphQuestions
  - On generating characteristic-rich question sets for QA evaluation.
- LC-QuAD
  - It is a gold-standard KBQA (Question Answering over Knowledge Base) dataset containing 5,000 questions and SPARQL queries. LC-QuAD uses DBpedia v04.16 as the target KB.
- MS MARCO
  - This is for real-world question answering.
  - Paper: https://arxiv.org/abs/1611.09268
- MultiRC
  - A dataset of short paragraphs and multi-sentence questions.
  - Paper: http://cogcomp.org/page/publication_view/833
- NarrativeQA
  - It includes the list of documents with Wikipedia summaries, links to full stories, and questions and answers.
  - Paper: https://arxiv.org/pdf/1712.07040v1.pdf
- NewsQA
  - A machine comprehension dataset.
  - Paper: https://arxiv.org/pdf/1611.09830.pdf
- Question-Answer Dataset by CMU
  - This is a corpus of Wikipedia articles, manually generated factoid questions from them, and manually generated answers to these questions, for use in academic research. These data were collected by Noah Smith, Michael Heilman, Rebecca Hwa, Shay Cohen, Kevin Gimpel, and many students at Carnegie Mellon University and the University of Pittsburgh between 2008 and 2010.
- SQuAD1.0
  - Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
  - Paper: https://arxiv.org/abs/1606.05250
- SQuAD2.0
  - SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 new, unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering (see the loading sketch after this list).
  - Paper: https://arxiv.org/abs/1806.03822
- Story Cloze Test
  - The ‘Story Cloze Test’ is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning. This test requires a system to choose the correct ending to a four-sentence story.
  - Paper: https://arxiv.org/abs/1604.01696
- TriviaQA
  - TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions.
  - Paper: https://arxiv.org/abs/1705.03551
- WikiQA
  - A publicly available set of question and sentence pairs for open-domain question answering.
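As referenced in the SQuAD2.0 entry above, the key difference from SQuAD1.1 is that some questions have no answer in the passage. Here is a small sketch of how that shows up when loading the data with the Hugging Face `datasets` library (an assumed convenience; in the official JSON distribution such questions are marked with `is_impossible: true`):

```python
# Separating answerable from unanswerable SQuAD2.0 questions (illustrative sketch).
from datasets import load_dataset

squad_v2 = load_dataset("squad_v2", split="validation")

answerable = [ex for ex in squad_v2 if len(ex["answers"]["text"]) > 0]
unanswerable = [ex for ex in squad_v2 if len(ex["answers"]["text"]) == 0]

# A SQuAD2.0 system must extract a span for the former and abstain on the latter.
print(len(answerable), len(unanswerable))
```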
## The IBM Watson DeepQA Research Team’s publications within 5 years

- 2015
  - “Automated Problem List Generation from Electronic Medical Records in IBM Watson”, Murthy Devarakonda, Ching-Huei Tsou, IAAI, 2015.
  - “Decision Making in IBM Watson Question Answering”, J. William Murdock, Ontology Summit, 2015.
  - “Unsupervised Entity-Relation Analysis in IBM Watson”, Aditya Kalyanpur, J. William Murdock, ACS, 2015.
  - “Commonsense Reasoning: An Event Calculus Based Approach”, E. T. Mueller, Morgan Kaufmann/Elsevier, 2015.
- 2014
  - “Problem-oriented patient record summary: An early report on a Watson application”, M. Devarakonda, Dongyang Zhang, Ching-Huei Tsou, M. Bornea, Healthcom, 2014.
  - “WatsonPaths: Scenario-based Question Answering and Inference over Unstructured Information”, Adam Lally, Sugato Bachi, Michael A. Barborak, David W. Buchanan, Jennifer Chu-Carroll, David A. Ferrucci, Michael R. Glass, Aditya Kalyanpur, Erik T. Mueller, J. William Murdock, Siddharth Patwardhan, John M. Prager, Christopher A. Welty, IBM Research Report RC25489, 2014.
  - “Medical Relation Extraction with Manifold Models”, Chang Wang and James Fan, ACL, 2014.
## MS Research’s publications within 5 years

- 2018
  - “Characterizing and Supporting Question Answering in Human-to-Human Communication”, Xiao Yang, Ahmed Hassan Awadallah, Madian Khabsa, Wei Wang, Miaosen Wang, ACM SIGIR, 2018.
  - “FigureQA: An Annotated Figure Dataset for Visual Reasoning”, Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Akos Kadar, Adam Trischler, Yoshua Bengio, ICLR, 2018.
- 2017
  - “Multi-level Attention Networks for Visual Question Answering”, Dongfei Yu, Jianlong Fu, Tao Mei, Yong Rui, CVPR, 2017.
  - “A Joint Model for Question Answering and Question Generation”, Tong Wang, Xingdi (Eric) Yuan, Adam Trischler, ICML, 2017.
  - “Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension”, David Golub, Po-Sen Huang, Xiaodong He, Li Deng, EMNLP, 2017.
  - “Question-Answering with Grammatically-Interpretable Representations”, Hamid Palangi, Paul Smolensky, Xiaodong He, Li Deng.
  - “Search-based Neural Structured Learning for Sequential Question Answering”, Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang, ACL, 2017.
- 2016
  - “Stacked Attention Networks for Image Question Answering”, Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola, CVPR, 2016.
  - “Question Answering with Knowledge Base, Web and Beyond”, Scott Wen-tau Yih and Hao Ma, ACM SIGIR, 2016.
  - “NewsQA: A Machine Comprehension Dataset”, Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman, RepL4NLP, 2016.
  - “Table Cell Search for Question Answering”, Huan Sun, Hao Ma, Xiaodong He, Wen-tau Yih, Yu Su, and Xifeng Yan, WWW, 2016.
- 2015
  - “WIKIQA: A Challenge Dataset for Open-Domain Question Answering”, Yi Yang, Wen-tau Yih, and Christopher Meek, EMNLP, 2015.
  - “Web-based Question Answering: Revisiting AskMSR”, Chen-Tse Tsai, Wen-tau Yih, and Christopher J.C. Burges, MSR-TR, 2015.
  - “Open Domain Question Answering via Semantic Enrichment”, Huan Sun, Hao Ma, Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang, WWW, 2015.
- 2014
  - “An Overview of Microsoft Deep QA System on Stanford WebQuestions Benchmark”, Zhenghao Wang, Shengquan Yan, Huaming Wang, and Xuedong Huang, MSR-TR, 2014.
  - “Semantic Parsing for Single-Relation Question Answering”, Wen-tau Yih, Xiaodong He, Christopher Meek, ACL, 2014.
## Google AI’s publications within 5 years

- 2018
  - Google QA
    - “QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension”, Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le, ICLR, 2018.
    - “Ask the Right Questions: Active Question Reformulation with Reinforcement Learning”, Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Paweł Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang, ICLR, 2018.
    - “Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors”, Radu Soricut, Nan Ding, 2018.
  - Sentence representation
    - “Did the model understand the question?”, Pramod K. Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere, ACL, 2018.
- 2017
- 2014
  - “Great Question! Question Quality in Community Q&A”, Sujith Ravi, Bo Pang, Vibhor Rastogi, and Ravi Kumar, ICWSM, 2014.
## Facebook AI Research’s publications within 5 years

- 2018
  - Embodied Question Answering, Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra, CVPR, 2018.
  - Do explanations make VQA models more predictable to a human?, Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, and Devi Parikh, EMNLP, 2018.
  - Neural Compositional Denotational Semantics for Question Answering, Nitish Gupta, Mike Lewis, EMNLP, 2018.
- 2017
## Books

- Natural Language Question Answering System - Boris Galitsky (2003)
- New Directions in Question Answering - Mark T. Maybury (2004)
- Part 3.5, Question Answering, in The Oxford Handbook of Computational Linguistics - Sanda Harabagiu and Dan Moldovan (2005)
- Chapter 28, Question Answering, in Speech and Language Processing - Daniel Jurafsky & James H. Martin (2017)
## Links

## Contributing

Contributions welcome! Read the contribution guidelines first.

## License

To the extent possible under law, seriousmac (the maintainer) has waived all copyright and related or neighboring rights to this work.