computational models of linguistic evolution; algorithmic texts; search engine optimization; autocompletion services; optimal experiment design
Vincent Buntinx, Frédéric Kaplan and Aris Xanthos (2017), Analyse multi-échelle de n-grammes sur 200 années d'archives de presse
Vincent Buntinx, Cyril Bornet and Frédéric Kaplan (2017), Studying Linguistic Changes over 200 Years of Newspapers through Resilient Words Analysis, in Frontiers in Digital Humanities
Yannick Rochat et al. (2016), Navigating through 200 years of historical newspapers, long paper, IPRES 2016, Switzerland.
Vincent Buntinx, Cyril Bornet and Frédéric Kaplan (2016), Studying linguistic changes on 200 years of newspapers, poster, DH2016, Poland.
Vincent Buntinx and Frédéric Kaplan (2015), Inversed N-gram viewer: Searching the space of word temporal profiles, long paper
Frédéric Kaplan and Dana Kianfar (2015), Google et l'impérialisme linguistique: Il pleut des chats et des chiens
Vincent Buntinx and Frédéric Kaplan, Negentropic linguistic evolution: A comparison of seven languages, long paper
This project investigates the recent role of algorithms in the evolution of natural languages on the internet. A large variety of algorithmic processes operate as intermediaries in textual chains, transforming texts into other texts. Other algorithms mediate our textual expression, for instance through auto-completion and suggestion services. Some algorithms, such as automatic translators, text-summarization techniques, text-spinning services, and other pattern-based generative writing systems, produce texts of their own.

The rapid development of these algorithms cannot be understood independently of the economic context in which they operate. Auto-completion and suggestion algorithms can transform misspelled queries into "correct" ones on which bids can be placed and for which ads can be displayed. Many text-producing algorithms optimize page ranks for search engines with the objective of bringing more traffic to the corresponding sites. This project intends to advance the development of methods and tools for monitoring this evolution, distinguishing algorithmic texts from texts produced by humans, and building computational models that give rise to new hypotheses about this global linguistic evolution.
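To make the mediating role of suggestion services concrete, the following is a minimal sketch of how an auto-completion system might map a partial query onto ranked completions. The vocabulary, frequencies, and function name are illustrative assumptions, not the project's actual method; a real service would rank candidates over large-scale query logs, where commercial bids can further shape which completions surface.

```python
# Hypothetical toy vocabulary mapping query strings to observed frequencies.
# In a deployed suggestion service this would come from massive query logs.
VOCABULARY = {
    "linguistic": 120,
    "linguistics": 95,
    "linguistic capitalism": 40,
    "algorithm": 300,
    "algorithmic text": 25,
}

def suggest(prefix, k=3):
    """Return up to k completions of `prefix`, most frequent first."""
    matches = [(term, freq) for term, freq in VOCABULARY.items()
               if term.startswith(prefix)]
    matches.sort(key=lambda tf: -tf[1])  # rank by descending frequency
    return [term for term, _ in matches[:k]]

print(suggest("lingu"))
```

Even this toy ranking shows how the service, not the writer, decides which continuations become visible, which is precisely the kind of algorithmic mediation the project proposes to monitor.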