Linguistic Profiling of a Neural Language Model

Abstract

In this paper we investigate the linguistic knowledge learned by a Neural Language Model (NLM) before and after a fine-tuning process, and how this knowledge affects its predictions across several classification tasks. We use a wide set of probing tasks, each of which corresponds to a distinct sentence-level feature extracted from different levels of linguistic annotation. We show that BERT is able to encode a wide range of linguistic characteristics, but it tends to lose this information when trained on specific downstream tasks. We also find that BERT’s capacity to encode different kinds of linguistic properties has a positive influence on its predictions: the more readable linguistic information about a sentence it stores, the better it predicts the expected label assigned to that sentence.
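
For readers unfamiliar with probing, the sketch below illustrates the general recipe: a simple diagnostic model is trained to predict a sentence-level linguistic feature from a frozen NLM’s representations, and its score is read as a measure of how much of that feature the representations encode. The model name, mean-pooling strategy, linear-regression probe, and toy feature (token count) are illustrative assumptions, not the paper’s exact experimental setup.

```python
# Minimal probing sketch: a linear probe over mean-pooled BERT representations.
# All concrete choices here (bert-base-uncased, mean pooling, token count as
# the target feature) are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def sentence_embedding(sentence: str) -> torch.Tensor:
    """Mean-pool the last-layer token representations into a single vector."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

# Toy corpus; in practice the probe is trained on a large annotated corpus.
sentences = [
    "The cat sat on the mat.",
    "Colorless green ideas sleep furiously.",
    "She read the report before the meeting started.",
    "Rain fell.",
    "The committee that reviewed the proposal rejected it unanimously.",
    "Birds sing at dawn.",
    "He wondered whether the experiment would ever converge.",
    "Time flies.",
]
# One sentence-level feature per sentence; token count stands in for the
# paper's richer features drawn from several levels of linguistic annotation.
targets = [len(s.split()) for s in sentences]

X = torch.stack([sentence_embedding(s) for s in sentences]).numpy()
X_train, X_test, y_train, y_test = train_test_split(
    X, targets, test_size=0.25, random_state=0
)

probe = LinearRegression().fit(X_train, y_train)
print("Probe R^2 on held-out sentences:", probe.score(X_test, y_test))
```

A high probe score suggests the feature is linearly recoverable from the representations; running the same probe before and after fine-tuning makes it possible to track how much of that information the model retains.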

Publication
In Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020, Online) [Outstanding Paper Award, COLING 2020]
Alessio Miaschi