Notes on Stella Palavecino’s “/læb/ bites: out of the ESL bubble and into the language lab” webinar

by Viktor Carrasquero

Computer-Assisted Pronunciation Training (CAPT) seems to be an ever-expanding field. PronSIG, through its journal and events, has provided a platform to showcase some of the most innovative applications of technology in the pronunciation classroom. For instance, Speak Out!, our journal, recently featured articles about barriers to operationalising CAPT (Alghazo, 2020) and models of technology integration (Cendoya & Martino, 2020). On 20th February, PronSIG invited Stella Palavecino to deliver /læb/ bites: out of the ESL bubble and into the language lab, a webinar in which she illustrated the use of Audacity®, an open-source audio editor, to provide learners with phonological awareness-raising practice. In this post, I will outline some of the principles that underlie the use of technology in our classes, particularly for teaching pronunciation, paying special attention to the ideas suggested by Palavecino.

Overcoming limitations

Alghazo (2020) identified two main barriers to the use of computers: teachers' and students' lack of knowledge of computer technology, and lack of time to focus on pronunciation in general or to scaffold technology-mediated learning specifically. The author concludes that it is key to provide technology training to both teachers and learners, and that this should be part of the education syllabus and programme rather than the sole endeavour of the pronunciation teacher, who might be struggling to understand the technology themselves. I agree that training for both parties is of undeniable importance, but I would add that task design (its simplicity, clarity and self-evident purpose) is equally important: teachers and learners should not have to think hard to see the benefits of using technology, nor should they have to spend more time coming to grips with it than focusing on the actual language points at hand.

One of the technology-integration models that Cendoya and Martino (2020) discussed was Puentedura's SAMR Model (2010), which holds that technology can either enhance or transform instructional tasks. The first two letters of the acronym stand for Substitution and Augmentation, levels at which technology essentially substitutes for a more traditional, analogue task. Using a mobile phone to play a recording repeatedly so that learners repeat after it is an example of this level of integration: teachers do not really need mobiles for this; they could simply play the recording on the classroom computer or another device. The last two letters stand for Modification and Redefinition, levels at which technology opens the door to tasks that were previously very difficult or simply impossible to carry out. Such is the case of Palavecino's webinar tips, which I present below.

The webinar

Palavecino described the many benefits of using inexpensive technology such as Audacity®, a free and open-source digital audio editor and recorder, which allows learners to analyse not only audio extracted from different media sources, such as videos they find on the Internet, but also their own voice. The speaker explained how the software empowers learners to carry out the typical procedures sound editors are capable of: slowing down recordings, which might facilitate the decoding of sound strings that are particularly difficult for learners; segmenting a larger audio source into shorter bits (which Palavecino calls bites); amplifying the volume of a sound file; and, most interestingly, creating silence gaps in audio files into which learners insert their own recording, in an attempt to approximate a pronunciation feature that might enhance their intelligibility.
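The webinar demonstrated all of this within Audacity's graphical interface. Purely as an illustration of the same operations for readers who like to script things, here is a minimal sketch using the Python library pydub; this was not part of the webinar, and the file names, timings and gain values are hypothetical.

```python
# A scripted mirror of the editing steps described above, using pydub.
# Audacity was the tool shown in the webinar; this only illustrates the
# same operations. pydub needs ffmpeg installed for non-WAV formats.
from pydub import AudioSegment

# Load a recording extracted from some media source (placeholder file name).
audio = AudioSegment.from_file("model_recording.wav")

# Segment the larger source into a shorter "bite" (times in milliseconds).
bite = audio[2000:5500]

# Amplify the bite by 6 dB.
bite = bite + 6

# Slow the bite down by lowering the frame rate. Note: unlike Audacity's
# "Change Tempo" effect, this naive approach also lowers the pitch.
slowed = bite._spawn(bite.raw_data, overrides={
    "frame_rate": int(bite.frame_rate * 0.75)
}).set_frame_rate(bite.frame_rate)

# Append a silence gap long enough for the learner to record their own
# attempt at the same phrase.
gap = AudioSegment.silent(duration=len(bite) + 1000)
practice_track = slowed + gap

practice_track.export("practice_bite.wav", format="wav")
```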

The latter use of the software allows for task redefinition, in the sense that sound editors enable learners to manipulate a sound file they find, or that their teachers give them, listen to it as many times as they need, and place their own version of the phrase, sentence or text next to the original audio, which they can then use to self-check their pronunciation. This practice empowers learners to become their own pronunciation models, as they strive to imitate the features of someone else's pronunciation.
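In scripted form, and continuing the hypothetical pydub sketch above, the self-check step might look something like this (again, the file names are placeholders and the webinar did this within Audacity itself):

```python
# Place the learner's own recording next to the original bite so they can
# listen back and compare the two versions.
from pydub import AudioSegment

original_bite = AudioSegment.from_file("practice_bite.wav")
my_attempt = AudioSegment.from_file("my_attempt.wav")

# Original, a short pause, then the learner's version, for easy comparison.
comparison = original_bite + AudioSegment.silent(duration=500) + my_attempt
comparison.export("self_check.wav", format="wav")
```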

Implications: Lab ‘on the go’

Central to Palavecino's suggestions is the idea that sound editing software such as Audacity® gives learners the freedom to analyse sound wherever they are, releasing them from the constraint of having to be present in a physical lab. Freedom from a physical lab has become increasingly important, particularly after this prolonged period of very limited access to specialised facilities in schools and universities, brought about by the ongoing global health crisis. These practical ideas are also compatible with Neri et al.'s (2002) claim that CAPT creates opportunities for individualised, focused learning environments in which the teacher truly facilitates learning, relinquishing total control over what learners do with sources of spoken language.

In the light of learners' relatively easy access to these technologies, and the fact that, as Pennington and Rogerson-Revell (2019) put it, they “provide endless opportunities for repetition and imitation, instantaneous responses, and exposure to a wide range of target-language speech” (p. 235), we should keep two corollaries in sight. First, getting our learners to listen to a pronunciation model so that they imitate it should only be aimed at helping them become more intelligible: accuracy work should centre on approximating the pronunciation features evident in the samples we expose learners to, not on imitating specific accents. We want students to develop their accuracy while keeping their identities and linguistic idiosyncrasies. Second, while learners' pre-eminent pronunciation model is the teacher themselves, technology allows us to expose them to a wide range of accents from different types of people from all over the world. Exposure to the great variety of accents that exist, which constitutes our pronunciation model bank, is a true asset in helping learners develop their receptive and productive spoken language skills.

I would like to end this post by thanking Stella Palavecino for delivering such a practical webinar. In giving us all tips for bringing the pronunciation lab closer to learners and empowering them to create their own models, Palavecino has tackled what Neri et al. (2002) see as a great hindrance to implementing technologies effectively: the lack of guidelines and instruction on how best to use them, in practical terms, to meet learning requirements. For this, we are all thankful. Now, go check out the webinar here to get all the suggestions and examples in detail!

References

Alghazo, S. (2020). Computer technology in pronunciation instruction: use and perceptions. Speak Out! 63, 7-21.

Audacity Team (2020). Audacity®: Free Audio Editor and Recorder (Version 2.4.2) [Computer application]. Retrieved from https://audacityteam.org/ on 8th March 2021.

Cendoya, A. & Martino, D. (2020). Revisiting technology mediated pronunciation teaching and learning. Speak Out! 63, 22-33.

Neri, A., Cucchiarini, C., Strik, H., & Boves, L. (2002). The pedagogy-technology interface in computer assisted pronunciation training. Computer Assisted Language Learning, 15(5), 441-7.

Pennington, M. & Rogerson-Revell, P. (2019). English pronunciation teaching and research: Contemporary perspectives. London: Palgrave. 

Puentedura, R. (2010). SAMR and TPCK: Intro to Advanced Practice. Retrieved from http://hippasus.com/resources/sweden2010/SAMR_TPCK_IntroToAdvancedPractice.pdf on 8th March 2021.