
Overview of the 2018 Spoken CALL Shared Task

Type of publication: Proceedings (peer-reviewed)
Authors: Claudia Baur, Andrew Caines, Cathy Chua, Johanna Gerlach, Mengjie Qian, Manny Rayner, Martin Russell, Helmer Strik, Xizi Wei
Project: A Crowdsourcing Platform for Spoken CALL Content

Proceedings (peer-reviewed)

Title of proceedings: Proceedings of Interspeech 2018
Place: Hyderabad, India

Open Access

Type of Open Access: Repository (Green Open Access)


We present an overview of the second edition of the Spoken CALL Shared Task. Groups competed on a prompt-response task using English-language data collected, through an online CALL game, from Swiss German teenagers in their second and third years of learning English. Each item consists of a written German prompt and an audio file containing a spoken response. The task is to accept linguistically correct responses and reject linguistically incorrect ones, where "linguistically correct" is defined by a gold standard derived from human annotations. Scoring used a metric defined as the ratio of the rejection rate on incorrect responses to the rejection rate on correct responses. The second edition received eighteen entries and showed very substantial improvement over the first edition: every entry outperformed the best entry from the first edition, and the best score was about four times higher. We present the task, the resources, the results, a discussion of the metrics used, and an analysis of what makes items challenging. In particular, we present quantitative evidence suggesting that incorrect responses are much harder to process than correct ones, and that the most significant factor in making a response challenging is its distance from the closest training example.
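The scoring metric described above can be illustrated with a minimal sketch. This is not the official shared-task scoring script (which may apply additional weighting or edge-case handling); it simply computes the ratio of per-class rejection rates as stated in the abstract. The function names and the "accept"/"reject" and "correct"/"incorrect" label strings are illustrative assumptions.

```python
def rejection_rate(decisions, labels, target_label):
    """Fraction of responses with the given gold label that the system rejected.

    decisions: list of system outputs, each "accept" or "reject" (assumed encoding)
    labels:    list of gold labels, each "correct" or "incorrect" (assumed encoding)
    """
    relevant = [d for d, l in zip(decisions, labels) if l == target_label]
    if not relevant:
        raise ValueError(f"no responses labelled {target_label!r}")
    return sum(d == "reject" for d in relevant) / len(relevant)


def differential_score(decisions, labels):
    """Ratio of the rejection rate on incorrect responses to that on correct ones.

    Higher is better: a good system rejects most incorrect responses
    while rarely rejecting correct ones. Undefined (division by zero)
    when the system rejects no correct responses at all.
    """
    rr_incorrect = rejection_rate(decisions, labels, "incorrect")
    rr_correct = rejection_rate(decisions, labels, "correct")
    return rr_incorrect / rr_correct


# Toy example: 2 incorrect responses (one rejected), 4 correct responses
# (one rejected), so the score is 0.5 / 0.25 = 2.0.
labels = ["incorrect", "incorrect", "correct", "correct", "correct", "correct"]
decisions = ["reject", "accept", "reject", "accept", "accept", "accept"]
print(differential_score(decisions, labels))  # 2.0
```

Intuitively, a score of 2.0 means the system is twice as likely to reject an incorrect response as a correct one; a score of 1.0 would indicate rejections uncorrelated with correctness.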