#3073

Paper presentation

Tackling linguistic difficulty through serious games

Sat, Jun 18, 17:00-17:30 Asia/Tokyo

Location: Zoom A

The difficulty and complexity of language features (such as orthographic forms, grammatical structures, or morphosyntactic units) for second language learners have attracted considerable theoretical and empirical research attention in recent years (Pallotti, 2014; Housen & Simoens, 2016; Bulté & Housen, 2018). Understanding the relative difficulty of language features is important for determining whether simple or difficult features should be the focus of instruction, and whether particular instructional approaches are more effective for one or the other. To date, the best available indicators of feature difficulty have been the judgments of experts (e.g. teachers) and the perceptions of students (Housen, 2014). With the rise of digital learning methods, such as digital game-based language learning tools, vast amounts of data are being generated by learners as they attempt to learn second and foreign languages: for example, how many errors are made and how much time is taken as each of thousands of players strives for language mastery. These data can be analysed statistically to determine more objectively which language features learners find difficult to grasp.

In this study we measure feature difficulty objectively using data from a set of 16 minigames developed in the Horizon 2020 iRead project, which aims to develop primary school children’s reading skills through digital minigames and an e-reader, both connected to teacher analytics software. Data were obtained from 744 Spanish EFL students who played the minigames a total of 67,623 times, covering 225 language features across six categories (orthography, phonology, word recognition, morphology, syntax, and morphosyntax). The system automatically logs each game a student plays, as well as their level of success. Given this multitude of game logs, the lme4 R package was used to carry out an Item Response Theory (IRT)-based analysis (Kadengye et al., 2014; Debeer et al., 2021) separating student ability, minigame difficulty, and language feature difficulty. This provides an objectively generated list of relative language feature difficulties for this population. Preliminary analysis suggests, for example, that at the easier end of the scale are adjectives, question words, and the phonology of lower-frequency consonants (e.g. q, x, j), while at the more difficult end are features such as anaphora, syllabification, and prefixes/suffixes.
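The separation described above follows the logic of a Rasch/IRT model, in which the probability of a correct response depends on student ability minus item difficulty. As an illustrative sketch only (the study itself fits a crossed random-effects model with lme4 in R), the toy Python example below simulates binary game outcomes and then recovers student ability, minigame difficulty, and feature difficulty by gradient ascent on the Bernoulli log-likelihood; all counts and parameter values here are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_games, n_feats, n_obs = 200, 8, 10, 12000

# "True" parameters used to simulate play logs (hypothetical values).
true_ability = rng.normal(0, 1, n_students)
true_game_d = rng.normal(0, 0.5, n_games)
true_feat_d = np.linspace(-1.5, 1.5, n_feats)  # easiest -> hardest

# Each log entry: which student played which minigame on which feature.
s = rng.integers(0, n_students, n_obs)
g = rng.integers(0, n_games, n_obs)
f = rng.integers(0, n_feats, n_obs)

# Rasch-style success probability: sigmoid(ability - game diff - feature diff).
logit = true_ability[s] - true_game_d[g] - true_feat_d[f]
y = rng.random(n_obs) < 1 / (1 + np.exp(-logit))

# Fit by full-batch gradient ascent; residuals drive all three gradients.
ability = np.zeros(n_students)
game_d = np.zeros(n_games)
feat_d = np.zeros(n_feats)
for _ in range(400):
    p = 1 / (1 + np.exp(-(ability[s] - game_d[g] - feat_d[f])))
    r = y - p
    for param, idx, sign, size in ((ability, s, 1.0, n_students),
                                   (game_d, g, -1.0, n_games),
                                   (feat_d, f, -1.0, n_feats)):
        upd = np.zeros(size)
        np.add.at(upd, idx, sign * r)          # accumulate per-parameter gradient
        cnt = np.bincount(idx, minlength=size)
        param += 0.5 * upd / np.maximum(cnt, 1)  # mean gradient, step size 0.5

feat_d -= feat_d.mean()          # centre for identifiability
order = np.argsort(feat_d)       # estimated easiest -> hardest features
print(order)
```

The point of the sketch is the decomposition itself: because every log entry carries a student, a minigame, and a feature, one fit disentangles the three sources of variation, and the centred feature-difficulty estimates give exactly the kind of ranked difficulty list the abstract describes.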

Discussion will address the extent to which this bottom-up approach holds promise for our understanding of the difficulty of linguistic features, particularly as more and more data sets like this are generated and explored by CALL researchers.