by Ana Paula Biazon Rocha
PronSIG’s online conference, ‘Will the AI revolution remove the need for pronunciation teachers?’, on 12 October 2024, was an excellent opportunity to explore and reflect on the impact of Artificial Intelligence (AI) on pronunciation instruction. As mentioned in our previous blog post, AI is an inescapable part of this new reality. During the conference, we considered different insights and perspectives, especially those shared in the opening and closing plenaries, led by experts in pronunciation and technology, Beata Walesiak and Shannon McCrocklin, respectively. Thus, it is worth revisiting five key takeaways.
1. ‘What would it take for AI to replace pronunciation teachers?’, Beata Walesiak
a. Common abbreviations
Walesiak started her session by introducing some common AI-related abbreviations that may not be immediately familiar to us.
Table 1: List of abbreviations – B. Walesiak’s opening plenary
b. AI variety vs. pronunciation pedagogy
Walesiak listed several AI web and phone applications that support learners with aspects of pronunciation, such as sound production, word stress, pitch, prominence, and rhythm. Most of these tools incorporate phonetic symbols and offer feedback on learners’ recordings. However, they usually lack a pedagogical approach to help students learn more effectively. In other words, most of them do not follow Celce-Murcia et al.’s (2010) Communicative Framework for teaching pronunciation, which emphasises guiding learners from recognising and analysing pronunciation features to applying them in communicative practice. For more details on how to teach pronunciation communicatively, check this previous blog post.
Furthermore, most AI apps continue to rely on native-speaker pronunciation models, reinforcing an often unrealistic native-like standard. That is why teachers should help students critically assess these tools and understand their limitations.
c. Not replacing, augmenting
Image 1: Walesiak’s opening plenary
Finally, Walesiak reinforced that ‘AI should enhance and augment humans, not replace them’. To put it simply, AI tools should be used to support human abilities and improve tasks rather than take over entirely, allowing people to focus on more complex, creative, or interpersonal aspects that require human insight and judgement. In addition, she believes that AI can help promote a shift in pronunciation instruction, moving towards project-based and student-led learning, collaborative projects, game creation, and teaching skills for critical thinking and problem-solving.
2. ‘Surveying the gap between CAPT and STT programs: Can AI chatbots fill the void?’, Shannon McCrocklin
a. Principles of good pronunciation teaching
McCrocklin began by pointing out some key principles of good pronunciation teaching, such as:
- focusing on features that affect intelligibility;
- modelling intelligible pronunciation;
- analysing and addressing learners’ individual needs;
- providing explicit feedback;
- promoting learner autonomy.
To achieve this, McCrocklin also referred to Celce-Murcia et al.’s (2010) Communicative Framework, as mentioned previously.
Image 2: McCrocklin’s closing plenary
b. Pronunciation practice with an AI chatbot
Although much of McCrocklin’s research has focused on Computer-Assisted Pronunciation Training (CAPT) and Speech-to-Text (STT) technologies and their impact on pronunciation learning, her latest study (McCrocklin & Colclasure, 2024) centred on Gliglish. Through this AI chatbot, students can interact with an AI teacher and role-play real-life situations (e.g., ordering at a bakery, asking for directions) to practise listening, speaking, and pronunciation skills. Her main research questions were:
- To what degree can Gliglish successfully provide responses to requests for pronunciation information and activities?
- Does Gliglish provide useful feedback on pronunciation errors?
The findings indicated that Gliglish offers pronunciation modelling, controlled production (with a focus on form), meaningful communication, and explicit feedback. It may also emphasise intelligibility and provide explanations about pronunciation features. However, an important limitation is its lack of a more structured pronunciation lesson, as per Celce-Murcia et al.’s (2010) Communicative Framework, to better support students’ learning.
McCrocklin then concluded that ‘AI chatbot helps fill some gaps, but there is more work to do’. While tools such as Gliglish can provide pronunciation modelling, feedback, and practice opportunities, they still fall short of offering a more comprehensive, structured approach necessary for more effective pronunciation learning.
As you can see, there is no need to panic yet. Pronunciation teachers will still play an essential role in students’ learning process, one way or another… or at least, we hope so!
Were you a ticket holder for our October conference on AI? Remember that you can rewatch any of the conference sessions by following the recording links in our email sent on 14 October. If you can’t find this email, and it’s not in your Spam folder, get in touch with us at [email protected] and we can check this for you. Didn’t attend the conference but wish you had? You can purchase access to the recordings here.
For more on pronunciation instruction, please check our previous blog posts. Don’t forget to leave your comments below and follow PronSIG on social media.
References
Celce-Murcia, M., Brinton, D. M., & Goodwin, J. M. (2010). Teaching Pronunciation: A Course Book and Reference Guide (2nd ed.). Cambridge University Press.
McCrocklin, S., & Colclasure, Q. (2024). The AI chatbot, Gliglish, and potential pronunciation learning. 15th Annual Pronunciation in Second Language Learning and Teaching Conference, Ames, IA, United States.