Ching-Chong Cued Speech Chang

The Deaf community takes up arms, and rightly so, when a celebrity or comedian mimics gibberish ASL. The latest offender: Jamie Foxx on the Jimmy Fallon show. Others include Chelsea Handler, Cecily Strong, and pretty much any SNL sketch to do with sign language.

Now, I consider myself a hard person to offend. Gibberish ASL has made the rounds so often by now that I just consider it a cheap shot, comparable to putting on horn-rimmed glasses, fake buck teeth, and chattering out a “chinky chinky Chinaman” routine. It’s been done to death, it’s connected to negative and insulting stereotypes, and it’s nothing like the original language or culture, so it doesn’t even make enough sense to be funny.

In other words, it’s pulling random gestures out of one’s ass. It’s lazy, tacky, and trite. Hearing comedians can be bad enough about this; you’d think Deaf comedians would know better.

You’d think. If you don’t have three minutes to spare, skip right on to 2:10.

Now, the joke itself starts out OK. The driver decides to weasel out of a speeding ticket by pretending that he knows Cued Speech– so of course, he bungles it up, thinking the cop won’t know better. The cop recognizes the driver’s attempts at Cued Speech, holds up his finger, and returns to his squad car…

…and takes out a paper with cue words printed on it, replying with his own version of cue gibberish.

OK. A few things to say here.

  1. Remember, this is at Gallaudet. The only university for the Deaf in the world, one that hosts a multitude of sign languages from all over the world. It is, in fact, the birthplace of Cued Speech, with a vibrant Cued Speech community in the DC, Virginia, West Virginia, and Maryland area. How hard would it have been to find someone who knew Cued Speech to play the policeman, or even to have the policeman flag down someone who happened to know both ASL and Cued Speech?
  2. He couldn’t at least have mouthed with the cues? That’s how Cued Speech works– it clarifies lipreading. There is no Cued Speech without lipreading!
  3. What’s up with the paper? It’s not… you can’t just cue right off a sheet of paper without knowing Cued Speech already. Yes, I talk about how you can learn the system off a sheet of paper in a weekend… but that doesn’t mean you can start cueing fluently right off the bat. Again, I think the video would have worked much better if the policeman started cueing fluently, and/or called in someone who knew Cued Speech.

I don’t know if the original author intended to insult Cued Speech. I don’t think so; my impression is that Cued Speech was a handy option for tricking a policeman who most likely only knew sign language. To be honest, I was glad to see Cued Speech getting recognition at Gallaudet! Unfortunately, making up random cues, instead of taking the time to reproduce a reasonably accurate version, cheapened the humor for me.

On a more positive note, this is one of the very few sign language parody videos I actually liked. At the risk of ruining the humor by overanalyzing it: first, her “signs” actually have some relation to what she’s trying to say, so part of the fun is seeing how she acts out several concepts. This requires effort and on-the-spot thinking. In fact, a lot of deaf comedy acts incorporate this element; they try to “sign” without actually signing. Second, while the video pokes fun at both the interpreter and the mayor– especially on the Spanish bit– it isn’t insulting or demeaning to the broader d/hh community (at least, I don’t think so). While its execution isn’t perfect, I’d say they got the idea on this one right.

We Aren’t Outliers

“You had strong family support.”

“You went to a good school.”

“You got lots of one-on-one time, didn’t you?”

“You were exposed to other cuers.”

Sometimes, when I tell others about what Cued Speech did for me growing up, someone will mention the above, as if those factors somehow negate or diminish Cued Speech’s efficacy. It’s like they’re implying that Cued Speech itself didn’t work, that the other factors had to compensate, or that I was the exception that proved the rule.

It’s true that family and educational support are immensely important, and often, if not usually, a deciding factor in a child’s success. Home and school are where the child spends most of his time. However, communication access and literacy depend heavily on what the people in those environments are equipped to provide.

In a residential school, or a mainstreamed program with a strong Deaf presence, everyone is either d/hh or more visually oriented, or has (ideally!) received training and support to meet language requirements. Staff can act as appropriate language models, which ensures communication access and, to some degree, academic success.

Outside of residential schools, though, getting access to appropriate language models can be much more challenging– not to mention the complexities of using a manual language to impart literacy in a completely separate aural language. That’s if you have access to ASL at all; more often, what I’ve seen is a mixture of auditory-verbal therapy and manually coded sign systems, and the results vary just as widely, from very, very good to very, very bad. In fact, many cueing parents took up Cued Speech precisely because their local programs or residential schools were not a viable option for one reason or another.

In evaluating different approaches to d/hh education, we need to look at each approach’s overall results, not just specific examples. We can’t cherry-pick outliers to prove our point. That’s probably why those statements at the beginning somewhat annoy me: in my experience, success at attaining language and literacy through Cued Speech is the norm, not the exception.

In my experience, signing d/hh people who can write or read well tend to be in the minority. On the flip side, cueing d/hh people who have those odd grammatical or spelling flukes– not typos, but more like what you might see from ESL speakers– are the exception; the rest read, write, and talk like native hearing speakers (with varying degrees of a “deaf” voice). I’ve had more than one person tell me that they wouldn’t know I was deaf just by reading my posts.

The studies on Cued Speech that I’ve read bear this out– in fact, I haven’t yet found any studies with negative results on Cued Speech’s use. (I do recall one with “meh” results in a group of hard-of-hearing students, but that’s about it.)

I suspect that you won’t see such consistent results among deaf signers, mainly for these reasons:

  1. The learning curve involved in picking up any manually coded or signed system. It demands greater commitment and effort from parents and teachers over the long term, so you’re much more likely to see wide variation in usage and proficiency.
  2. The linguistic and conceptual gap between sign language and spoken language (or, really, between any two languages). You can patch that gap somewhat, but patching will never replace incidental learning through full linguistic immersion (and not just through reading and writing).

This isn’t to make Cued Speech out to be a magic bullet that bestows language and literacy the instant someone starts using it for their kid. What it does do is enable someone to visually “recode” a language she already knows, without the delay of learning and translating through a second language. In this way, the d/hh kid is put on the same playing field as a hearing child for literacy and language acquisition, so d/hh cuers are much more likely to pick up spoken/written language at the same pace as their hearing counterparts.

How Cued Speech Represents Spoken Language

Anyone who’s familiar with manually coded English (MCE) systems such as Signed English, or with tools like Visual Phonics, may wonder, rightly so, how Cued Speech can provide 100% access to English on the lips and hands. Fortunately, Aaron Rose of Cue Cognatio has designed and illustrated a 3-D model that shows the relationship between Cued Speech and spoken language.

Image courtesy of Aaron Rose.

Aaron explains this model as follows:

“There are three components to each ‘system’ [speech and Cued Speech] for the purpose of expressing traditionally spoken language via speech and Cued Speech.

1.) Both systems use the same mouth shapes.
2.) The hand shapes take the place of the tongue placements (place of articulation).
3.) The hand placements take the place of the voicing/air (manner of articulation).

This is a general model and should not be used strictly for research purposes, but is intended to provide a better idea of how and why spoken language and cued language express the same linguistic information in different modes.”

TED Talk and Captioning

It’s finally been released: a TED talk on Cued Speech by Cathy Rasmussen.

Now, a fellow cuer, Benjamin Lachman, posted the video to our Facebook page and asked for some crowdsourcing on adding accurate captions. Another cuer, Aaron Rose, took him up on that request, and that link up there on the amara.org website now has accurate captions– although for some reason, the direct YouTube link still transcribes Cued Speech as “cute speech,” among other things (which admittedly makes me giggle).

For me, just seeing that request made me think of possibilities for captioning Cued Speech videos. See, I’ve captioned sign language videos before, both my own and others’. Captioning is not extraordinarily difficult, but it can be very time-consuming. Essentially, you’ve got to break up the caption lines and align them with the correct timestamps, which entails a lot of right-clicking the frame and watching mouth movements to make sure you end on the right word. It’s even trickier when you have to translate the content into a different language, and a phrase in the original language doesn’t match up with the timing for the captioned language. This applies even when you’re the one who produced the content.
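For the curious, a caption file is mostly just timed text. Here’s a minimal snippet in the common SRT format (the timestamps and wording are invented for illustration); every one of those timestamps is what you end up lining up against mouth movements:

```
1
00:00:02,000 --> 00:00:04,500
Cued Speech is a system of visually
representing the sounds of spoken language.

2
00:00:04,600 --> 00:00:06,800
It clarifies lipreading.
```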

But with Cued Speech, I think seeing the handshapes with the mouth would help facilitate that process, especially when combined with speech recognition software that will automatically sync a pre-uploaded transcript with the correct timestamps. It would also enable other cuers to contribute captions to the video (as Aaron did) without any discrepancies in interpretation, because it’s straight-up transliteration. Not to mention, it would be excellent cue-reading practice for budding cuers.
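To illustrate that transcript-syncing idea, here’s a toy sketch in Python. Real auto-sync tools derive timing from the audio itself via speech recognition; this hypothetical naive_sync just spreads caption lines evenly across the video, which is exactly the crude step the real software does intelligently:

```python
# Toy sketch only: spread transcript lines evenly across a video's length.
# Real auto-sync uses speech recognition to find where each line is spoken;
# this naive version just shows the transcript -> timed-captions step.
def naive_sync(transcript_lines, video_seconds):
    slot = video_seconds / len(transcript_lines)
    return [(i * slot, (i + 1) * slot, line)
            for i, line in enumerate(transcript_lines)]

lines = ["Hello, officer.", "I'm afraid I only cue.", "Is there a problem?"]
for start, end, text in naive_sync(lines, 9.0):
    print(f"{start:5.2f} --> {end:5.2f}  {text}")
```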

It’s kind of exciting to think how accessible Cued Speech videos could become with this captioning process. In that kind of work, every little bit that makes things easier helps.

Why Not Both?

Growing up, I never really saw a conflict between sign language and Cued Speech. Even if I couldn’t quite articulate it yet at four years old, I could tell they were different and didn’t see any reason to pick one over the other. As I got older, people asked me about the difference, so I’d tell them that signs are based on words and cues are based on sounds. Sometimes they’d ask me which I liked better, and I couldn’t really answer because, well, it was like comparing apples and oranges. Later on, when I connected with other deaf adult cuers, I found that we’d often code-switch between Cued Speech and American Sign Language.

All of this, by the way, mirrors my experiences with other languages– notably, Mandarin and my 2011 study abroad in Beijing with other international students. We jumped between languages a lot, depending on what was most appropriate for the context. (One of these days, I need to post my story about having a conversation in ASL with the one other hard-of-hearing guy in the program, after a semester of full immersion in Mandarin.)

Personally, I find ASL useful for expressing emotions that may not have an appropriate English equivalent, whereas Cued English helps me articulate concepts in a precise, orderly manner. Sometimes I’ll combine the two– for example, I may use a classifier on my left hand to show spatial placement or shape while cueing a description with my right hand. That’s just me, though; others will almost certainly differ.

Some people seem to think using both will “confuse” deaf children. Thing is, I know people in Europe who grew up speaking as many as five, six different languages. Why can’t deaf kids achieve the same thing through ASL and Cued English? We’ve got reams and reams of research out there supporting bilingual education. Personally, I think Cued English would tie in perfectly with the Bilingual-Bicultural educational model in residential schools now, and I’ve spoken to several educators who feel the same way.

That said, I do understand the concern about Cued Speech taking precedence over ASL, or favoring a purely auditory-oral/“fixing deaf people” approach reminiscent of the days of Bell (as well-intentioned as he was). No matter how good our technological and educational approaches become, there will never be a one-size-fits-all solution, and we will probably always have a wide spectrum of deaf people in terms of language and speech production.

A fellow cuer, Aaron Rose, recently said of American Sign Language and Cued Speech, “You’re comparing apples and oranges, but at the same time both are used to nourish the body.” And that’s probably the best way to look at it.

A Croaking Dalek with Laryngitis

What’s up with the name?

Long story short, I am deaf. I got a cochlear implant when I was ten. No, my parents didn’t ask me for my input. No, I didn’t and don’t resent them for it. No, it’s not a cure, and yes, it does help.

The title comes from a late-deafened British member of Parliament, Jack Ashley, who got a cochlear implant in his 70s. He described the sound as “a croaking Dalek with laryngitis,” and the phrase stuck with me. Coming up with unique URL names is ridiculously difficult, so I’m copping this one while it lasts.

No, I haven’t seen Doctor Who yet, and yes, I plan to watch the series.

OK, so what’s up with this blog? 

Well. Most deaf kids are raised with sign language or spoken language– which are often referred to as manualism or oralism respectively (quit snickering)– or a combination of both. Now me, I grew up with Cued Speech. Because it’s not terribly commonplace, there are a lot of misconceptions out there about it, so this blog is my attempt at sorting it out.

Cued Speech? What’s that?

Cued Speech is one of those things that is just difficult to explain because nobody has a frame of reference for it; it doesn’t neatly fit into any one box. The way I try to explain it, whilst floundering all over myself (“no, it’s not sign language, yes, it uses the hands but it’s not sign language, no, it’s not visual phonics, I don’t know why they’re different, they just are, no, you don’t need to voice it, yes, it represents sound but you don’t need to SAY it…”), is this:

Cued Speech is a system of visually representing the sounds of spoken language in conjunction with lipreading. It’s got eight handshapes, with about three consonants per handshape, and four movements/placements with three or four vowels per movement/placement. They’re arranged so that sounds that look the same on the lips are assigned to different handshapes– for example, /m/ goes on handshape 5, while /b/ goes with handshape 4 and /p/ with handshape 1. As you mouth/voice the words, you put the cues together like a puzzle, and presto! Cued Speech.
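If it helps to see that disambiguation idea concretely, here’s a minimal sketch in Python using only the three consonants mentioned above (the full chart covers every consonant; this is nowhere near complete):

```python
# Partial chart: just the examples from the paragraph above.
# /m/, /b/, and /p/ all look identical on the lips, so Cued Speech
# assigns each one a different handshape to tell them apart.
HANDSHAPE_FOR = {"m": 5, "b": 4, "p": 1}

def handshapes(lookalike_consonants):
    """Map consonants that share a lip shape to their distinct handshapes."""
    return {c: HANDSHAPE_FOR[c] for c in lookalike_consonants}

print(handshapes(["m", "b", "p"]))  # {'m': 5, 'b': 4, 'p': 1}
```

The vowels work on the same principle, with the placements around the face doing the disambiguating instead of the handshapes.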

There are plenty of sites out there that explain it far better than I ever could, and they have video too. The National Cued Speech Association is a good place to start: www.cuedspeech.org. CueEverything has an excellent collection of damn near every Cued Speech video out there, at www.cueeverything.com.

What I’ll be doing here is sharing my experiences and observations with Cued Speech, as well as forwarding research or news related to it, on a regular basis (approximately once a week is my plan) until… I run out of things to say, I suppose. Down the road, I’d like to publish brief vlogs in both Cued and sign language. Whichever way it goes, I’ll post all about it right here.