There has been significant progress in developing question answering systems in recent years; however, most of these systems implicitly assume that the answer and its evidence appear close together in a single document. How can we teach computers to answer questions that require reasoning over multiple pieces of evidence?
In collaboration with Caiming Xiong, Nitish Shirish Keskar, and Richard Socher, former Salesforce Research Scientist Victor Zhong investigated this question and proposed a new state-of-the-art question answering model that combines evidence across multiple documents in his paper Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering. This research was one of 6 papers from the Salesforce Research Team recently accepted at the Seventh International Conference on Learning Representations (ICLR).
We caught up with Victor to ask him about his research and his reaction to the ICLR acceptance. Check out the Q&A below!
What is your primary focus area?
My primary research area is learning natural language understanding through interactions. My recent work includes task-oriented dialogue and question answering, and I've also worked on relation extraction and knowledge base population.
Tell us about your ICLR research.
Our ICLR work Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering (CFC) introduces a new method for multi-evidence question answering. There has been a wealth of recent work on reading comprehension, including our work[1, 2] from Salesforce Research. However, one limitation of this recent work is that it tends to be restricted to settings in which the question is answerable given a very local context[3]. More recently, researchers have looked at settings in which the model needs to consider multiple pieces of evidence in order to answer the question[4]. In this work, we proposed a multi-evidence question answering model called the CFC that selects among a set of candidate answers given a set of support documents and a question.
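To make the setting concrete, a WikiHop-style instance pairs a query with a set of support documents and a small candidate set. The sketch below is a rough illustration of that structure; the field names loosely follow the public WikiHop release, and the support texts are paraphrased for illustration rather than taken from the dataset.

```python
# A WikiHop-style instance (illustrative; field names roughly follow the
# public WikiHop release, support texts are paraphrased).
example = {
    "query": "country hanging_gardens_of_mumbai",
    "supports": [
        "The Hanging Gardens are terraced gardens perched at the top of "
        "Malabar Hill in Mumbai.",
        "Mumbai is the capital city of the Indian state of Maharashtra.",
    ],
    "candidates": ["iran", "india", "pakistan"],
    "answer": "india",
}
```

Neither support document alone links the gardens to a country; the model must chain the fact that the gardens are in Mumbai with the fact that Mumbai is in India. This is exactly the multi-evidence reasoning the CFC targets.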
The CFC combines coarse-grain reasoning and fine-grain reasoning. In coarse-grain reasoning, the model builds a coarse summary of the support documents conditioned on the question, without knowing what candidates are available, then scores each candidate. In fine-grain reasoning, the model matches the specific fine-grain contexts in which a candidate is mentioned against the question in order to gauge the relevance of that candidate. These two strategies are respectively modeled by the coarse-grain and fine-grain modules of the CFC. Each module employs a novel hierarchical attention, a hierarchy of coattention and self-attention, to combine information from the support documents conditioned on the question and candidates. The CFC achieves a new state-of-the-art result of 70.6% accuracy on the QAngaroo WikiHop task, beating the previous best by 3% accuracy despite not using pretrained contextual encoders.
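For intuition, here is a minimal PyTorch sketch of one such hierarchical attention layer: a coattention step that fuses a support-document encoding with the question encoding, followed by a self-attention step that pools the fused sequence into a single summary vector. This is a simplified illustration rather than the paper's exact architecture; the module name, variable names, and dimensions are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    """Illustrative sketch: coattention between a support document and the
    question, followed by self-attention pooling into one summary vector.
    Simplified relative to the CFC's actual modules."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Bidirectional GRU to fuse the coattended document representation.
        self.fuse = nn.GRU(2 * hidden_size, hidden_size,
                           bidirectional=True, batch_first=True)
        # Scoring layer for self-attention pooling.
        self.score = nn.Linear(2 * hidden_size, 1)

    def forward(self, doc_enc: torch.Tensor, q_enc: torch.Tensor) -> torch.Tensor:
        # doc_enc: (batch, doc_len, hidden), q_enc: (batch, q_len, hidden)
        # Affinity between every document position and question position.
        affinity = torch.bmm(doc_enc, q_enc.transpose(1, 2))        # (B, D, Q)
        # Attend over the question for each document position.
        doc2q = torch.softmax(affinity, dim=2)                      # (B, D, Q)
        q_summary = torch.bmm(doc2q, q_enc)                         # (B, D, H)
        # Attend over the document for each question position, then map
        # those question-side summaries back to document positions
        # (the second-level coattention).
        q2doc = torch.softmax(affinity, dim=1)                      # (B, D, Q)
        doc_summary = torch.bmm(q2doc.transpose(1, 2), doc_enc)     # (B, Q, H)
        coatt = torch.bmm(doc2q, doc_summary)                       # (B, D, H)
        # Fuse the two views of the document with a bidirectional GRU.
        fused, _ = self.fuse(torch.cat([q_summary, coatt], dim=2))  # (B, D, 2H)
        # Self-attention: score each position and pool to one vector.
        weights = torch.softmax(self.score(fused).squeeze(2), dim=1)   # (B, D)
        return torch.bmm(weights.unsqueeze(1), fused).squeeze(1)       # (B, 2H)
```

In the CFC, summaries produced along these lines are what the coarse-grain module scores against candidate encodings, while the fine-grain module applies the same coattention-then-self-attention pattern to the contexts in which each candidate is mentioned.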
What was your reaction upon finding out your research had been selected for ICLR?
I am excited to share this work with the wider research community at ICLR. I think that while we as a field have made significant progress in textual question answering and reading comprehension over the last decade, there remains a substantial amount of work to make these techniques robust and scalable enough to be used in practice. I'm excited to discuss future work in this area with our friends and colleagues in the research community at ICLR.
How might your research benefit the average human?
In the grand scheme of information systems, we would like a system that aggregates evidence across a large collection of documents (e.g., the internet) instead of observing each document in isolation. Our work is a step in this direction of multi-evidence reasoning.
What’s something that really excites you right now within the world of AI and Machine Learning?
I think the emergence of unsupervised/semi-supervised pretraining in NLP has been a fantastic boon to our community. It is enabling us to pursue higher-level tasks by leveraging the statistical patterns these models encode through reading vast amounts of text.
Learn more about the Salesforce Research Team and view available roles at einstein.ai/careers.