26 Sep 2019


In natural language processing, words must be given numerical representations before they can be passed as input to a machine learning model. A word is usually represented as a vector (a list of numbers). These word vectors must adequately capture the meaning of the words; semantically related words should have similar vectors. The better these representations, the stronger the performance of the machine learning model is likely to be.
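
As a rough illustration of what "similar vectors" means, the closeness of two word vectors is often measured with cosine similarity. The sketch below uses made-up three-dimensional vectors rather than vectors learned by a real model:

```python
import numpy as np

# Toy, hand-picked 3-dimensional word vectors; real models use hundreds of dimensions.
word_vectors = {
    "river":  np.array([0.9, 0.1, 0.0]),
    "stream": np.array([0.8, 0.2, 0.1]),
    "money":  np.array([0.0, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    # Cosine similarity is close to 1.0 when two vectors point in similar directions.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(word_vectors["river"], word_vectors["stream"]))  # high: related words
print(cosine_similarity(word_vectors["river"], word_vectors["money"]))   # low: unrelated words
```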

Many methods of learning word vectors give a word the same vector regardless of the context in which it appears. However, it is not unusual for a word to have several different meanings. A classic example is the word “bank” in the following two sentences:

“I walked along the river bank”

“I deposited some money into my bank account”

Using the same vector to represent “bank” in both instances impairs the performance of the downstream machine learning model that takes these word vectors as input.

Over the last few years, a great deal of research has gone into learning contextual word vectors: word vectors that vary depending on the context in which the word appears. This enables the same word to have different vectors depending on how it is used in a sentence. A recent paper (Devlin et al., 2018) introduced BERT (Bidirectional Encoder Representations from Transformers), a new way of learning contextual word vectors. BERT utilises a powerful encoder architecture capable of modelling longer-range dependencies between words, and it introduced an innovative way of capturing the context both before and after a word in the sentence. With these better contextual vectors, BERT achieved state-of-the-art performance on several natural language processing tasks.
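
As an illustrative sketch (assuming the Hugging Face transformers library and the publicly released bert-base-uncased model, neither of which is prescribed here), BERT produces a different vector for “bank” in each of the two sentences above, because the surrounding contexts differ:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "I walked along the river bank",
    "I deposited some money into my bank account",
]

bank_vectors = []
for sentence in sentences:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # last_hidden_state has shape (1, num_tokens, hidden_size): one contextual vector per token.
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    bank_vectors.append(outputs.last_hidden_state[0, tokens.index("bank")])

# A static embedding would give identical vectors here; BERT's contextual vectors differ.
similarity = torch.cosine_similarity(bank_vectors[0], bank_vectors[1], dim=0)
print(f"Cosine similarity between the two 'bank' vectors: {similarity.item():.3f}")
```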

In a recent paper, we proposed a new relation extraction model built on top of BERT. Given any paragraph of text (for example, the abstract of a biomedical journal article), our model will extract all gene-disease pairs which exhibit a pre-specified relation. In our paper, the relations we were interested in concerned the function change caused by a gene mutation and its effect on disease progression. The word vectors supplied by BERT give our model a way of encoding the meaning expressed in the text with regard to our entities of interest. We then fine-tune this encoding further so that it can more accurately identify when a paragraph of text contains a gene-disease relation of interest. Such relation extraction models are crucial in drug discovery; too many journal articles are published every day for a human to read and summarise, and a machine learning model capable of automatically extracting relevant gene-disease pairs can greatly accelerate this process.
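
The sketch below is not the exact architecture from the paper; it is a minimal illustration of the general idea, assuming the Hugging Face transformers library, a hypothetical set of relation labels, and illustrative markers around the gene and disease mentions. A BERT encoder summarises the paragraph, and a single linear layer classifies the relation expressed between the marked entities:

```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

# Hypothetical relation labels, loosely inspired by function-change relations.
RELATION_LABELS = ["no_relation", "loss_of_function", "gain_of_function"]

class RelationExtractor(nn.Module):
    """A minimal relation classifier: a BERT encoder plus a single linear layer."""

    def __init__(self, num_labels):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Use the pooled [CLS] representation as a summary of the whole paragraph.
        return self.classifier(outputs.pooler_output)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = RelationExtractor(num_labels=len(RELATION_LABELS))

# Illustrative entity markers; the paper's actual input formatting may differ.
text = "Mutations in [GENE] BRCA1 [/GENE] increase the risk of [DISEASE] breast cancer [/DISEASE]."
inputs = tokenizer(text, return_tensors="pt")
logits = model(inputs["input_ids"], inputs["attention_mask"])
print(RELATION_LABELS[logits.argmax(dim=-1).item()])
```

In practice, the encoder and classifier would be fine-tuned together on labelled examples of gene-disease relations rather than used with a freshly initialised classification layer.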

Publication

Biomedical relation extraction with pre-trained language representations and minimal task-specific architecture
