Multiple Different Natural Language Processing Tasks in a Single Deep Model

Humans learn natural languages, such as English, starting from basic grammar and moving to complex semantics, all within a single brain. How can we build a single computational model that handles a variety of natural language processing (NLP) tasks in the same way? As a first step towards this goal, we have developed a single deep neural network that can learn five different NLP tasks and achieve state-of-the-art results. Our model starts from basic tasks and gradually moves to more complex ones, repeating this cycle until it finishes learning all of the tasks. This allows the tasks to interact with each other, with the potential to improve the accuracy of all of them. Further details can be found in our paper.
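To make this training regime concrete, here is a minimal Python sketch of such a task-cycling loop. The names (`model.train_step`, `datasets`) are illustrative placeholders, not our actual implementation:

```python
# Hypothetical sketch of the many-task training loop: `model` and
# `datasets` are placeholder names, not the paper's actual API.

# Tasks ordered from basic (word-level) to complex (sentence-pair-level).
TASKS = ["pos_tagging", "chunking", "dependency_parsing",
         "semantic_relatedness", "textual_entailment"]

def train(model, datasets, num_epochs):
    for epoch in range(num_epochs):
        # Each epoch cycles through every task, always in the same
        # low-to-high order, so the lower-level layers are refreshed
        # before the higher-level tasks build on them.
        for task in TASKS:
            for batch in datasets[task]:
                model.train_step(task, batch)
```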

What can our model do?

Let’s process the example sentence “A man with a jersey is dunking the ball at a basketball game” using our model.

Our model first analyzes the grammatical roles (parts of speech) of the words in the sentence. A part-of-speech tag is assigned to each word; for example, the tag NN means that the corresponding word is a noun. Part-of-speech tagging is one of the most basic and fundamental NLP tasks.
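If you want to see part-of-speech tags for yourself, an off-the-shelf tagger such as spaCy (not our model) produces the same kind of analysis; the `en_core_web_sm` model must be downloaded separately:

```python
import spacy

# Off-the-shelf tagger for illustration only; our model's tagger
# is a separate neural network described in the paper.
nlp = spacy.load("en_core_web_sm")

doc = nlp("A man with a jersey is dunking the ball at a basketball game")
for token in doc:
    # token.tag_ is the fine-grained Penn Treebank tag, e.g. NN for a noun.
    print(f"{token.text}\t{token.tag_}")
```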

Next, the syntactic phrases (chunks) in the sentence are identified; for example, the tag NP means that the corresponding span is a noun phrase. This task goes beyond the word level to handle syntactically motivated phrases.
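Noun-phrase chunks can likewise be inspected with spaCy’s `doc.noun_chunks` iterator. Note that this off-the-shelf tool extracts only noun phrases, whereas full chunking also labels verb phrases, prepositional phrases, and so on:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("A man with a jersey is dunking the ball at a basketball game")

# spaCy exposes base noun phrases only; our model's chunker
# covers the full chunk inventory.
for chunk in doc.noun_chunks:
    print(f"{chunk.text}\tNP")
```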

Subsequently, the syntactic relationships (dependencies) between the words are identified using the previously analyzed results; for example, the arrow from the symbol “ROOT” to the word “dunking” means that “dunking” plays the central role in representing the sentence, with “man” as the subject (nsubj) of the action and “ball” as its object (dobj). This task is fundamental for analyzing sentence structure and detecting relationships between words.
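The dependency structure can also be inspected with spaCy, again purely as an illustration rather than our model’s parser:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("A man with a jersey is dunking the ball at a basketball game")

for token in doc:
    # token.dep_ is the relation label (nsubj, dobj, ...);
    # token.head is the word this token attaches to.
    print(f"{token.text:<12}{token.dep_:<10}-> {token.head.text}")
```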

Now that our model has finished syntactically analyzing the single sentence, it is time to go beyond single-sentence tasks. In the next phases, our model tries to understand semantic relationships between two sentences. Let us denote the example sentence above as sentence A and take another sentence B:

“The ball is being dunked by a man with a jersey at a basketball game.”

In the semantic tasks, our model predicts how semantically related the two sentences are, using a real-valued score ranging from 1 to 5; the higher the score, the more semantically related the sentence pair. The score for sentences A and B is 4.9, because the two sentences describe the same situation. In addition, our model can recognize whether sentence A entails sentence B. Here, our model says that sentence A entails sentence B.
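To give a feel for how a network can produce a real-valued score in [1, 5], here is a minimal NumPy sketch of one common formulation for this task (following the Tree-LSTM line of work): combine the two sentence vectors into a feature vector, predict a probability distribution over the integer scores 1 to 5, and take the expected value. The sentence vectors and weights below are untrained random placeholders, not our model’s parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def relatedness_score(h1, h2, W, b):
    """Map two sentence vectors to a score in [1, 5].

    The feature vector combines the element-wise distance and
    product of the two sentence representations.
    """
    features = np.concatenate([np.abs(h1 - h2), h1 * h2])
    logits = W @ features + b              # 5 logits, one per score 1..5
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                   # softmax over {1, ..., 5}
    return float(probs @ np.arange(1, 6))  # expected score

d = 100                                                  # placeholder dimensionality
h1, h2 = rng.standard_normal(d), rng.standard_normal(d)  # stand-ins for real encodings
W, b = rng.standard_normal((5, 2 * d)), np.zeros(5)      # untrained placeholder weights
print(relatedness_score(h1, h2, W, b))
```

With trained weights and real sentence encodings in place of the random placeholders, this same computation yields scores like the 4.9 reported above.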

State-of-the-art results on multiple tasks by a single model

As these examples show, our single model can handle NLP tasks at different levels. Now one question arises: how accurate is our model on the five tasks? As shown in our paper, our single model achieves state-of-the-art results on syntactic chunking, dependency parsing, semantic relatedness, and textual entailment, and its part-of-speech tagging score is competitive with the state of the art. These results are notable because most previous models were designed to handle only one task, or a few related tasks, and achieved the best results only on that limited set.

Citation credit

Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, Richard Socher
A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks
