How Cued Speech Represents Spoken Language

Anyone familiar with manually coded English (MCE) systems such as Signed English or Visual Phonics may rightly wonder how Cued Speech can provide 100% access to English on the lips and hands. Fortunately, Aaron Rose of Cue Cognatio has designed and illustrated a 3-D model that shows the relationship between Cued Speech and spoken language.

Image courtesy of Aaron Rose.

Aaron explains this model as follows:

“There are three components to each ‘system’ [speech and Cued Speech] for the purpose of expressing traditionally spoken language via speech and Cued Speech.

1.) Both systems use the same mouth shapes.
2.) The hand shapes take the place of the tongue placements (place of articulation).
3.) The hand placements take the place of the voicing/air (manner of articulation).

This is a general model and should not be used strictly for research purposes, but is intended to provide a better idea of how and why spoken language and cued language express the same linguistic information in different modes.”

Cued Speech, Cued English, or Cued Language?

These three terms tripped me up for the longest time! It doesn’t help that at the time Cornett developed Cued Speech, most professionals thought speech was inextricably linked to language. So Cornett called his system “Cued Speech” – even though the system doesn’t require voicing – and the name has stuck; over time, though, researchers developed newer, more precise terms.

Here’s the difference in a nutshell:

Cued Speech: the original name for the methodology as a whole.

Cued (American) English: the Cued adaptation for English as used in the United States. The Cued British English variant adapts the vowel cues to UK pronunciation. You can also say Cued Spanish, Cued Mandarin, and so on.

Cued Language: the newer term that specifically encompasses languages and de-emphasizes speech; it’s also the standard term used in disability accommodations – as in Cued Language Transliterator, or CLT.

The Bilingual-Bicultural Dilemma

I’ve studied at least five languages. I majored in English, and minored in American Sign Language and Mandarin, including a four-month study abroad in Beijing. In high school, I dabbled in a semester or two of Latin and Spanish. (I highly recommend Latin as a starter language, by the way; it’s an incredibly useful key for any Romance language.)

The one constant in all my language studies was that at some point, you must immerse. Bar none, that’s the best way to improve your proficiency. Even my ASL instructors stressed this, and required us to attend at least one Deaf event per semester.

Yet the one glaring exception seems to be deaf children learning English. Most bilingual-bicultural (Bi-Bi) programs I’ve seen address this by establishing ASL as a base language and teaching all or most classes – including reading and writing – in ASL with written support.

There is some logic to this approach. Even with hearing aids and cochlear implants, deaf children don’t have the same access to spoken language that hearing children do. The bulk of our language proficiency comes through incidental learning, and for most people, that learning happens auditorily. For deaf children, though, the primary mode is usually visual.

Hence, establishing English proficiency for deaf children becomes a trade-off between two general routes: some variant of Signed English, which follows English structure much more faithfully but tends to be functionally incomplete as a language; or American Sign Language, which is a complete language in its own right and, as a result, does not follow English structure.

The paramount objective is to establish a complete first language, ideally from fluent users. It’s much easier to pick up other languages when you have a solid foundation in a base language. However, multilingual speakers will also tell you that the best way to increase your proficiency is full immersion – not just reading and writing, but also daily conversation with native speakers. You can go only so far studying a second language through your first before you hit a roadblock. While proficiency is still very much doable – I’ve seen it several times, especially among prolific readers – it does get much harder. In my experience, you have to reverse-engineer. A lot.

How, then, do you reconcile these two paradigms in deaf education? By now, you know my answer is Cued Speech. It’s a 100% visual mode of communication that accurately represents spoken language in real time, so hearing parents can act as complete language models for their deaf children without butchering ASL to fit English structure. And on the flip side, deaf children can attain full immersion in English, whether English is their L1 or an L2+.

I’ve said several times that Cued Speech would be the perfect addition to any Bi-Bi program. ASL would stay ASL, English would stay English, and students would get the benefit of learning to think not only in two languages, but also in two different modalities.