What Makes My Model Perplexed? A Linguistic Investigation on Neural Language Models Perplexity

Abstract

This paper presents an investigation aimed at studying how the linguistic structure of a sentence affects the perplexity of two of the most popular Neural Language Models (NLMs), BERT and GPT-2. We first compare the sentence-level likelihood computed with BERT and the perplexity computed with GPT-2, showing that the two metrics are correlated. In addition, we exploit linguistic features capturing a wide set of morpho-syntactic and syntactic phenomena, showing how they contribute to predicting the perplexity of the two NLMs.
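To make the two sentence-level scores concrete, the sketch below (not the authors' code) shows one common way to compute them with the Hugging Face transformers library: GPT-2 perplexity from the causal language-modeling loss, and a BERT pseudo-log-likelihood obtained by masking one token at a time. Model names and the scoring details are assumptions for illustration.

```python
# Minimal sketch of the two sentence-level scores, assuming the standard
# Hugging Face `transformers` API; not the paper's original implementation.
import torch
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          BertForMaskedLM, BertTokenizerFast)

def gpt2_perplexity(sentence: str) -> float:
    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids the model returns the mean token-level
        # cross-entropy; perplexity is its exponential.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def bert_pseudo_log_likelihood(sentence: str) -> float:
    tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Mask each non-special token in turn and sum its log-probability.
    for i in range(1, ids.size(0) - 1):
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

print(gpt2_perplexity("The cat sat on the mat."))
print(bert_pseudo_log_likelihood("The cat sat on the mat."))
```

Since BERT's pseudo-log-likelihood and GPT-2's perplexity live on different scales, comparisons of the kind described in the abstract are typically made via correlation over a set of sentences rather than by direct value matching.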

Publication
In Proceedings of the 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures (NAACL 2021, Online)
Alessio Miaschi