Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation

Abstract

In this paper we present a comparison between the linguistic knowledge encoded in the internal representations of a contextual Language Model (BERT) and a context-independent one (Word2vec). We use a wide set of probing tasks, each of which corresponds to a distinct sentence-level feature extracted from different levels of linguistic annotation. We show that, although BERT is capable of understanding the full context of each word in an input sequence, the implicit knowledge encoded in its aggregated sentence representations is still comparable to that of a context-independent model. We also find that BERT is able to encode sentence-level properties even within single-word embeddings, obtaining results comparable or even superior to those obtained with sentence representations.
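
As an illustration of the kind of probing setup the abstract describes, the sketch below trains a linear probe on frozen BERT sentence representations (mean-pooled token vectors from one hidden layer) to predict a single sentence-level feature, here sentence length. The toy data, the choice of layer and pooling, and the use of scikit-learn's LinearRegression as the probe are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal probing-task sketch: predict a sentence-level feature (length)
# from frozen BERT sentence embeddings. Illustrative only.
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

sentences = [
    "The cat sat on the mat .",
    "Colorless green ideas sleep furiously .",
    "She read the report before the meeting started .",
    "Birds fly .",
]
targets = [len(s.split()) for s in sentences]  # sentence-level feature: length

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def sentence_embedding(sentence, layer=-1):
    """Mean-pool the token vectors of one hidden layer into a sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    hidden = outputs.hidden_states[layer]          # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()   # shape: (768,)

X = [sentence_embedding(s) for s in sentences]

# Linear probe: if the feature can be predicted from the frozen embeddings,
# the representation implicitly encodes it.
probe = LinearRegression().fit(X, targets)
print("train R^2:", r2_score(targets, probe.predict(X)))
```

In the paper's setting one such probe would be trained per linguistic feature, so probe accuracy can be compared across features, layers, and models (contextual vs. context-independent).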

Publication
In Proceedings of the 5th Workshop on Representation Learning for NLP (ACL 2020, Online)
Alessio Miaschi