Is Neural Language Model Perplexity Related to Readability?

Abstract

This paper explores the relationship between Neural Language Model (NLM) perplexity and sentence readability. Starting from the evidence that NLMs implicitly acquire sophisticated linguistic knowledge from large amounts of training data, our goal is to investigate whether perplexity is affected by the linguistic features used to automatically assess sentence readability, and whether the two metrics are correlated. Our findings suggest that the correlation is actually quite weak and that the two metrics are affected by different linguistic phenomena.
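To make the two quantities concrete, here is a minimal sketch of how sentence-level perplexity can be computed with a pretrained NLM and correlated with a readability score. The model choice (GPT-2 via Hugging Face Transformers) and the word-length readability proxy are illustrative assumptions, not the paper's actual experimental setup.

```python
# Sketch: sentence perplexity under a pretrained NLM vs. a readability proxy.
# GPT-2 and the word-length proxy are illustrative choices, not the paper's setup.
import torch
from scipy.stats import spearmanr
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity = exp(mean token-level cross-entropy) under the NLM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return torch.exp(loss).item()

def readability_proxy(sentence: str) -> float:
    """Crude stand-in for feature-based readability: average word length
    (higher = harder to read). Real readability assessment uses many more
    linguistic features."""
    words = sentence.split()
    return sum(len(w) for w in words) / max(len(words), 1)

sentences = [
    "The cat sat on the mat.",
    "Notwithstanding prior stipulations, the undersigned parties concur.",
    "Children enjoy playing outside in summer.",
]
ppl = [sentence_perplexity(s) for s in sentences]
readability = [readability_proxy(s) for s in sentences]
rho, p = spearmanr(ppl, readability)  # rank correlation between the two metrics
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```

On a realistic corpus, a weak rank correlation in such an analysis would mirror the paper's finding that perplexity and readability are driven by different linguistic phenomena.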

Publication
In Proceedings of the Seventh Italian Conference on Computational Linguistics (CLiC-it 2020)
Alessio Miaschi
PostDoc in Natural Language Processing