Cueing Pronunciation

A while ago, I had an absolutely fascinating Skype chat with Thomas Shull, a speech-language pathologist and cued language transliterator from the East Coast. He runs DailyCues, a very comprehensive resource for learning Cued Speech.

I’d contacted Thomas because I’d been taking speech therapy since last fall, and a big problem I have is rhythm. My speech tends to be pretty choppy and monotone, not natural at all. In our Skype chat, Thomas noted that deaf cuers’ speech patterns tended to match how their parents or transliterators cued. Being new cuers, the parents or transliterators often fell into the habit of cueing each word individually.

The thing is, that’s not how people naturally talk, especially once schwas come into play. People tend to chunk words and mush sounds together, especially when similar final and initial consonants land next to each other (for example, “what do you mean” becomes “whaddya mean?”). Pronunciation also differs depending on the part of speech, which is another thing I’m still learning all about.

Thomas noted that when he paired deaf clients with a cued language transliterator who cued spoken language as it was pronounced in real time, rather than as individually pronounced words, those clients’ speech improved measurably after about a year of exposure: much more natural rhythm, better pronunciation, better stress.

I grew up with some bad speech habits that, at 25, are a bit difficult to weed out. I wish I’d been exposed to Cued Speech more while growing up, at least from Cued Speech models who cued how they spoke, not just word-for-word. Much of language learning is mimicry: copying how you’ve seen others do it. That’s how I retain information on correctly producing words or signs; I copy what I’ve seen/heard from native users. Learning these things as you go is a lot easier and more efficient than trying to work backwards from what you thought you already knew.