Cued Speech is not Visual Phonics

DISCLAIMER: In describing Visual Phonics, I’m going off what I’ve seen from those who have used it or learned it, including videos on YouTube. If my information here is incorrect, please let me know in the comments. Also bear in mind that this is not intended to be a value judgment; it’s simply my attempt at explaining how Visual Phonics and Cued Speech differ, based on what I know.

Like Cued Speech, Visual Phonics has a cue for each phoneme of English, and the cues are based on pronunciation, not orthography. Its aim is to give visual access to those phonemes. With that definition, you can see why Cued Speech and Visual Phonics are often lumped into the same category. I see some key differences, though, and much of it comes down to how Cued Speech was designed for maximum efficiency of movement.

If you’re not familiar with Cued Speech, trot on over to this chart and note the handshapes and placements:

http://www.cuedspeech.org/pdfs/guide/Cue-Chart-American-English.pdf

So, with that in mind, here are the differences I see:

  • Handshapes. 
    VP: 46 cues, one for each phoneme. These cues are akin to very distinct gestures or individual “signs.”
    CS: 8 handshapes, with 3-4 consonants per handshape. The handshapes are held flat and differentiated by which fingers are extended.
  • Movement.
    VP: The cues seem to imitate the movement or rhythm of the phoneme (for example, long vowel sounds versus short ones). I don’t know whether this design is deliberate, but I do notice the correlation.
    CS: Rather than distinctive movements, Cued Speech distinguishes vowels by placement: four locations around the face (chin, cheek, side, and throat). As with the handshapes, 3-4 vowels are assigned to each placement.
  • Communication.
    VP: I don’t know of Visual Phonics being used as anything other than a speech support. If anyone does, let me know in the comments. It does look like it would be cumbersome to use in real-time communication, but again, let me know if I’m mistaken.
    CS: Can be used as real-time communication or as a speech support. Cued Speech was designed for efficient movement and minimal strain on the wrist and fingers. Cornett chose the handshapes he did for a reason: you can move easily from one handshape to another, and from one placement to another, while keeping pace with speech.

This is what I see right off the bat. If you’ve anything to add, there’s the comment button below.

“Cued Speech is just a tool.”

And sometimes that’s followed up with “…not a communication method.”

Well, first off, I’m a native cuer. I can cue anything to another cuer, and he’ll understand everything I say, and vice versa. It doesn’t matter if we voice or not; all the phonetic components of English are right there on our lips and hands. That is communication! It’s complete language access.

If you want to get picky about it, everything is a tool, i.e., a way to accomplish a particular end. Even sign language is a tool. Spoken language is a tool. Written language is a tool. They’re all ways of communicating, and Cued Speech, an exact representation of an existing language, is no exception.

The nice thing about Cued Speech is that it can be used by itself, voiced or unvoiced, alongside sign language, as a speech therapy support, as reading/vocabulary support, with d/hh kids, with autistic or learning-disabled kids, with ESL speakers…

The key word there is “can.” How it’s used is ultimately up to whoever picks it up. Really, the fact that Cued Speech is a tool is probably its greatest strength: it can fit into a variety of approaches without detracting from their central philosophies.