Inbal Arnon
The Learnability Consequences and Sources of Zipfian Distributions in Language
Speaker
Inbal Arnon
Prof. Arnon is a linguist and developmental psycholinguist. Her main interests are first language acquisition, learning theory, psycholinguistics, and the way cognitive biases impact language emergence and structure. Her research lies at the intersection of Linguistics, Psychology, and Cognitive Science and uses a variety of experimental methods to explore how language is learned and how learning changes as a function of prior knowledge and experience.
Moderator
Simon Kirby
I am Professor of Language Evolution at the University of Edinburgh and an elected Fellow of the British Academy, the Royal Society of Edinburgh, and the Cognitive Science Society, and a member of the Academy of Europe. I work in parallel on scientific and artistic investigations of cultural evolution and the origins of human uniqueness, particularly the evolution of language. I founded the Centre for Language Evolution, which has pioneered techniques for growing languages in the laboratory and for exploring language evolution using computer simulations. My artistic work includes Cybraphon, which won a BAFTA in 2009 and is now part of the permanent collection of the National Museum of Scotland.
Abstract
While the world’s languages differ in many respects, they share certain commonalities: these can provide crucial insight into our shared cognition and how it shapes language structure. In this project, we explore the learnability sources and consequences of one of the most striking commonalities across languages: the way word frequencies are distributed. Across languages, words follow a Zipfian distribution, showing a power-law relation between a word’s frequency and its rank. Intuitively, this reflects the fact that languages have relatively few high-frequency words and many low-frequency ones, and that frequency does not decrease in a linear way. The source of this distribution has been heavily debated, with ongoing controversy about what it can tell us about language. Here, we propose that such distributions confer a learnability advantage, leading to enhanced language acquisition in children and creating a cognitive pressure to maintain similarly skewed distributions over time. In the first part, we examine the learnability consequences of Zipfian distributions. We characterize the greater predictability of words in such distributions using the information-theoretic notion of efficiency and ask three questions: (1) Are different languages similarly predictable? (2) If so, is learning uniquely facilitated by language-like predictability in both children and adults? (3) Is this facilitation limited to the linguistic domain? In the second part, we explore the learnability sources of Zipfian distributions and ask whether learning biases can help explain why such distributions are so common in language.
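The power-law relation mentioned in the abstract is often written as f(r) ∝ 1/r^a, with the exponent a close to 1 for natural language text, so the second-ranked word occurs roughly half as often as the first. As a purely illustrative sketch (not taken from the talk; the toy corpus below is invented), the following Python snippet shows how observed word frequencies in a small text can be compared against this idealized Zipfian prediction:

    # Illustrative sketch only: compares observed rank-frequency counts in a
    # tiny invented corpus with the idealized Zipfian prediction f(r) ~ f(1)/r.
    from collections import Counter

    corpus = ("the cat sat on the mat and the dog sat on the rug "
              "and the cat saw the dog on the mat").split()

    counts = Counter(corpus)
    ranked = counts.most_common()  # words sorted from most to least frequent

    top_freq = ranked[0][1]
    for rank, (word, freq) in enumerate(ranked, start=1):
        predicted = top_freq / rank  # ideal Zipfian frequency for this rank
        print(f"rank {rank}: {word!r} observed={freq} predicted≈{predicted:.1f}")

In a toy corpus of this size the fit is only rough, but across large corpora the observed rank-frequency curve approximates the prediction closely; this cross-linguistic regularity is the starting point of the talk.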