Quantifiers in natural language optimize the simplicity/informativeness trade-off
Abstract
While the languages of the world vary greatly, linguists have discovered many restrictions on possible variation. Semantic universals are restrictions on the range of variation in meaning across languages. Recently, in several domains—e.g. kinship terms, color terms—such universals have been argued to arise from a trade-off between simplicity and informativeness. In this paper, we apply this method to a prominent domain of function words, showing that the quantifiers in natural language also appear to be optimized for this trade-off. We do this by using an evolutionary algorithm to estimate the optimal languages, systematically manipulating the degree of naturalness of languages, and showing that languages become closer to optimal as they become more natural. Our results suggest that very general communicative and cognitive pressures may shape the lexica of natural languages across both content and function words.
