Cued Mandarin Transliteration

I took Mandarin on a lark in Fall 2009. I’d been fascinated with Chinese history, culture, art– everything– ever since I was little, and I’d been wanting to formally study Mandarin for a long time. In college, I finally had that chance, so I signed up for the next semester, and then had a whole bunch of meetings with my school’s student accessibility center, the Chinese instructors, and the head of the Chinese language department.

The long and short of it was that we all agreed: why not give it a try and see what happens?

I ended up studying Mandarin for four years and minoring in it.

Of course I went with cued language transliterators. My transliterator, Rosie, didn’t know a word of Mandarin, and never really got beyond the first class’s vocabulary, to my knowledge. She didn’t need to. I lucked out a bit: Mandarin is phonemically pretty finite, and it doesn’t have insane vowel/consonant combinations like English does. Much of the language’s semantic variation comes from the tones and syllabic pairings. I’m sure that explanation makes professional linguists want to stab me, but it’s close enough for our purposes.

In addition to that, pinyin (the most commonly-used system of romanizing Chinese characters and pronunciation) is very, very consistent. For example, “z” is always pronounced “dz”; “c” is “ts,” and “o” is always “oh” (unless it’s the final after b, f, m, or w, in which case it carries a weird “uoh” sound before, like “wuooah” or “froouaaah”), and “a” is a hard “a.” This, incidentally, may explain why I think I “see” a bit of a British accent when I lipread Chinese ESL speakers.
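
That consistency is regular enough to express as a lookup table. Here’s a minimal sketch covering only the letters mentioned above; the mapping and the function name are my own illustration, not a complete pinyin pronunciation chart.

```python
# Illustrative sketch of the pinyin-to-sound rules described in the text.
# Only covers the handful of letters mentioned; not a full pinyin table.

PINYIN_SOUNDS = {
    "z": "dz",
    "c": "ts",
    "a": "ah",   # hard "a"
    "o": "oh",   # default case
}

def approximate_sound(initial: str, final: str) -> str:
    """Return a rough English-ish rendering of a pinyin initial + final pair."""
    sound = PINYIN_SOUNDS.get(final, final)
    # Special case from the text: "o" after b, f, m, or w picks up a glide.
    if final == "o" and initial in {"b", "f", "m", "w"}:
        sound = "uoh"
    return PINYIN_SOUNDS.get(initial, initial) + sound

print(approximate_sound("b", "o"))  # buoh
print(approximate_sound("z", "a"))  # dzah
```

The point isn’t the table itself; it’s that the table is small and exception-poor, which is exactly what made cueing Mandarin feasible for a transliterator who didn’t speak it.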

The Cued English system ended up being a great fit, actually, even though it isn’t designed to show tones. I’m sure it would have been even better with Cued Mandarin, but we didn’t have the resources or the time to take it up– plus I’m not aware of any real-life examples of Cued Mandarin that we could have learned from (it’s one thing to develop a Cued language system; it’s another to put it into practice so you can refine it. Cornett spent at least a year or two on the system, I believe).

More than once, someone in class would make a joke in Mandarin and I’d laugh with the others because I understood what they were referring to; it just went over my transliterator’s head. On the few occasions that Rosie couldn’t make it, we’d have someone else substitute– always someone with no prior knowledge of Mandarin. They were usually nervous as hell about cueing everything right– but they did! I’d tell them, “Yes, that’s the right way to cue it. I understand it. You’re doing good; just keep cueing what you hear and I’ll get it.”

In one case, I had a woman who’d learned Cued Speech to communicate with a childhood deaf friend in DC, and had no formal transliterator training. She got it down too, albeit slower.

To me, that’s one of the most amazing things about Cued Speech: the transliterator doesn’t even need to know the language; she just needs to cue what she hears in order to give her deaf client full visual access.

Cueing Pronunciation

A while ago, I had an absolutely fascinating Skype chat with Thomas Shull, a speech language pathologist and cued language transliterator from the East Coast. He runs DailyCues, which is a very comprehensive resource for learning Cued Speech.

I’d contacted Thomas because I’d been taking speech therapy since last fall, and a big problem I have is rhythm. I tend to talk pretty choppy and monotone– not natural at all. In our Skype chat, Thomas noted that deaf cuers’ speech patterns tended to match how their parents or transliterators cue. Being new cuers, the parents or transliterators often fell into the habit of cueing each word individually.

The thing is, that’s not how people naturally talk, especially when you throw schwas into the mix. People tend to chunk words, or mush sounds together, especially when you have the same final and initial consonant next to each other (for example, “what do you mean” becomes “whaddya mean?”). The pronunciation also differs depending on the part of speech– another thing I’m still learning all about.

Thomas noted that when he paired deaf clients with a cued language transliterator who cued spoken language as it was pronounced in real time, and not as individually-pronounced words, these deaf clients’ speech improved measurably after about a year of exposure: much more natural rhythm, better pronunciation, better stress.

I grew up with some bad speech habits that, at 25, are a bit difficult to weed out. I know I wish I’d been exposed more to Cued Speech growing up, at least from cued speech models who cued how they spoke, not just word-for-word. Much of language learning is mimicry– copying how you’ve seen others do it. That is how I retain information on correctly producing words or signs; I copy what I’ve seen/heard from native users. Learning these things as you go is a lot easier and more efficient than trying to work backwards from what you thought you already knew.

Fractured

One of the hardest things I’ve ever done was study abroad in Beijing for four months, with no accommodations for most of the semester. I’d enrolled in the full immersion track, which meant five straight days of class every week from 9 to 4, tutoring until 6, and homework until 10pm. Oh, and it was all in Mandarin; we weren’t allowed to speak in English except on weekends. The details elude me, but I remember we’d study between 20-50 vocabulary words every other day, usually in a deadline-induced panic to pass the next test.

Most of us had a meltdown at least once that semester. Mine came when I volunteered to be the class representative for our end-of-semester speech contest (seriously, I swear every Chinese course has a speech contest).

Early in the semester, I’d noticed that one other guy in the program had hearing aids, but I thought he didn’t sign. He’d seen my cochlear implant, but he thought I didn’t sign, either. We didn’t run into each other a lot, since he was on the non-immersion track, which focused on non-language courses and allowed about 500% more free time than the immersion students got. Naturally, they spent that time touring the city, interacting more with natives in one week than we did in an entire semester, since we were holed up in our rooms doing homework.

So, we went on like that, hanging out with our own groups, not signing, until one day just before Thanksgiving. We were in the hallway together, and when he caught my eye, he tentatively moved his hands: “do you sign?” I responded, “Yes, I do!” And we made brief, hurried plans to sit together at the program’s Thanksgiving dinner just to have a conversation in ASL after nearly three months of spoken Mandarin and English.

Lest this sound like the beginning to an epic love story: the guy was gay. Just to get that out of the way. Anyhoo, we did indeed grab a seat next to each other at the Thanksgiving dinner, and started signing while also speaking in Mandarin and English to the others, and oh my gosh. I can’t begin to describe what an absolute mindwarp that was.

Both of us had forgotten vocabulary in English and ASL. “There was the red… umm… red… oh geez, I forgot the sign for red. What’s the sign for red?!” Our grammar was all screwed up. Looking back on it, I’m amazed I maintained any semblance of coherency, shifting between three languages at the same time.

It didn’t stop there. At the end of the semester, I had a sign language interpreter and a cued language transliterator for our two-week study trip, because the other CLT broke her leg and couldn’t make it. We had several instances where I ended up translating for them (or trying to) because they didn’t know a word of Chinese beyond the basic pleasantries.

Language. It does funny things to the brain.

That Inner Voice

Every language has an underlying rhythm, a cadence that ebbs and flows. The vocabulary and basic grammar can be taught, but you’ve got to ride the current to develop a feel for it.

When I write, I have a “voice” in my head that tells me the rhythm, how it should “sound.” I’m putting all these words in quotes because I don’t really physically hear them. It’s just… flashes of words that zip across my mind, faster than I can catch them, because I’m too focused on the message to really think about each word that comes out.

I rely a lot on this “voice” when I study other languages, especially when I can mentally match it with facial expression, body language, and emotion. I’ve had it since I was little.

I have some hazy childhood memories from before I picked up Cued Speech, and while learning it at the AGBM school in Mount Prospect, Illinois. I saw things, and I pictured them, but I didn’t have words for them. I’m sure I had signs for them, but I don’t remember “seeing” print or spoken words for them like I do now.

This makes me wonder about my Deaf and CODA[*] friends, some of whom can pull out entire ASL poems and compositions at the drop of a hat. And once it’s out there, I see how everything merges. I’d wonder how the hell they thought of it, but I already know. Their inner voice is in ASL.

I did have one happy moment in an advanced ASL class on classifiers, though. Our instructor challenged us to show a meteor crashing into Earth with classifiers only. Either she picked me, or I volunteered– I don’t remember which– but either way, I went to the front of the class, held up two hands as if I were holding a ball, then jabbed my index finger into the center of that “ball” and spread my hands apart to mime an explosion. The whole thing took less than two seconds, and I honestly didn’t think twice about it; I just did what seemed most natural and effective for that particular concept. As soon as I finished, there was a brief silence, then a light round of clapping and nodding, and I saw that familiar look on my classmates’ faces, the same one I’d had so many times. The one that said, “ah-ha! So THAT’S how you say it!”


[*] Child of Deaf Adults. I have hearing CODA friends who sign far better than I could ever hope to achieve. Yes, I will hate them forever for it.

Oh! You’re deaf? Here’s some braille.

It seems just about every other d/hh person out there has a story about clueless people offering them unnecessary “accommodations.” You know, the ones where they tell a receptionist that they’re deaf, and she hands out materials in braille, or they tell the airport staff that they’re deaf, and bam, out comes the wheelchair.

Somehow, my whole life, I’d missed out on this defining experience… until one fateful day, when I was 25.

I’d arranged to meet with some friends at a nearby Mexican restaurant, so I walked in, pointed to my ears, and said “Hi, just so you know, I’m deaf. I’m meeting friends here. Table for three, please,” while holding up three fingers. The hostess, a lady in her early twenties, went “Oh!”, held up a finger, and bounced over to a cabinet in the back. She pulled out this giant white binder, carried it back to the front desk, and flipped it open. I looked down to see rows of raised dots: braille.

Taken aback, I waved my hands and said, “Oh, no, no, I’m deaf. I just need the regular menu, please. And table for three.” Again, the “Oh!” and the finger and the bouncing back to the cabinet, whereuponwhich[*] she pulled out another Giant White Binder and flopped it open on the front desk. I looked down.

Spanish.

The woman gave me a giant white binder full of menu items in Spanish.

Quite at my wits’ end, I thanked her again, grabbed the regular menu, and repeated that I just needed a table for three. After some back-and-forth she finally led me to an empty table in the back, where I proceeded to Facebook about it to the world.

The best part? This happened in Austin, Texas.

Five minutes’ walk away from the Texas School for the Deaf.


[*] yes, whereuponwhich is a real word. Because I said so.

Letter to a Hearing Parent

Sometimes I get emails or messages from worried parents with a newly-deaf or hard of hearing child. They want to know how I’ve done with Cued Speech, cochlear implants, sign language, etc. So, I do my best to give a balanced perspective, since I understand how lacking that can be in deaf education.

More than that, their questions often carry an undercurrent of fear and uncertainty, and I don’t blame them at all. It is overwhelming. So, I try to reach down to that core, if only to tell them that it’ll be OK and things will work out. That’s a pretty high promise, but at the same time, it’s not about guarantees– I don’t think anything with kids or other human beings is ever a guarantee. It’s mostly about, hopefully, helping these parents get to a more stable place emotionally. Sometimes, I think people just need to hear “it’ll be OK,” even if it doesn’t seem true at the time.

Eventually, this letter came out. It probably won’t apply to every parent of a d/hh child out there, but it’s more or less what I want to tell many of the parents who come to me.


Dear Parent,

It is OK to be afraid. You got thrown into a world that you know nothing about.

It is OK to grieve. Even if your child never misses her hearing, you likely had to radically recalibrate your expectations, and that in itself is a loss. It’s OK to acknowledge that loss.

It is OK to feel guilty. Chances are you did not do anything to incur cosmic or genetic karma on your kid. These things happen, and we can’t always predict or prevent them.

There is hope. I have met successful deaf and hard of hearing people from all backgrounds. Doctors, businessmen, lawyers, professors, engineers, tradesmen, scientists, service workers. They used American Sign Language, Signed English, Cued Speech, spoken language. Cochlear implants, hearing aids, nothing at all, or any combination of the above.

Some methods work better for a specific purpose than others. Some kids respond to one approach and not to another. You will need to experiment and find out what works best for your family. No matter what you pick, be consistent, and commit to it. If it doesn’t seem to be working after you’ve given it a chance for at least a few months, drop it and try something else. Don’t let anyone else make you feel guilty for doing so. Trust your gut. Trust your heart.

Your child is unique. Embrace that. Work with it. And chances are you won’t veer too far off course.

How Cued Speech Represents Spoken Language

Anyone who’s familiar with manually coded English (MCE) such as Signed English or Visual Phonics may wonder, rightly so, how Cued Speech can provide 100% access to English on the lips and hands. Fortunately, Aaron Rose of Cue Cognatio has designed and illustrated a 3-D model that shows the relationship between Cued Speech and spoken language.

Image courtesy of Aaron Rose.


Aaron explains this model as follows:

“There are three components to each ‘system’ [speech and Cued Speech] for the purpose of expressing traditionally spoken language via speech and Cued Speech.

1.) Both systems use the same mouth shapes.
2.) The hand shapes take the place of the tongue placements (place of articulation).
3.) The hand placements take the place of the voicing/air (manner of articulation).

This is a general model and should not be used strictly for research purposes, but is intended to provide a better idea of how and why spoken language and cued language express the same linguistic information in different modes.”
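
To make the parallel concrete, here’s a hypothetical data-structure sketch of that three-component model. The class names, field names, and the one-to-one pass-through mapping are all my own invention for illustration; real cue charts map phonemes to specific handshapes and placements, which this sketch doesn’t attempt to reproduce.

```python
# Hypothetical sketch of the three-component model quoted above.
# All names are illustrative; this is not an actual cue chart.

from dataclasses import dataclass

@dataclass(frozen=True)
class SpokenPhoneme:
    mouth_shape: str              # visible on the lips in both systems
    place_of_articulation: str    # tongue placement (not visible)
    manner_of_articulation: str   # voicing/air (not visible)

@dataclass(frozen=True)
class CuedPhoneme:
    mouth_shape: str       # component 1: identical to speech
    hand_shape: str        # component 2: stands in for place of articulation
    hand_placement: str    # component 3: stands in for manner/voicing

def cue_for(p: SpokenPhoneme) -> CuedPhoneme:
    """Map a spoken phoneme onto its cued counterpart, per the model above."""
    return CuedPhoneme(
        mouth_shape=p.mouth_shape,
        hand_shape=p.place_of_articulation,        # placeholder mapping
        hand_placement=p.manner_of_articulation,   # placeholder mapping
    )
```

The design point the model makes is that nothing is added or dropped: the two invisible components of speech are simply moved onto the hand, which is why cueing carries the same linguistic information in a different mode.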

TED Talk and Captioning

It’s finally been released: a TED talk on Cued Speech by Cathy Rasmussen.

Now, a fellow cuer, Benjamin Lachman, posted the video to our Facebook page and asked for some crowdsourcing on adding accurate captions. Another cuer, Aaron Rose, took him up on that request, and that link up there on the amara.org website now has accurate captions– although for some reason, the direct YouTube link still transcribes Cued Speech as “cute speech,” among other things (which admittedly makes me giggle).

For me, just seeing that request made me think of possibilities for captioning Cued Speech videos. See, I’ve captioned sign language videos before, both my own and others’. Captioning is not extraordinarily difficult, but it can be very time-consuming. Essentially, you’ve got to break up the caption lines and align them with the correct timestamps, and this entails a lot of right-clicking the frame and watching mouth movements to make sure you end on the right word. It’s even trickier when you have to translate the content into a different language, and a phrase in the original language doesn’t match up with the timing for the captioned language. This applies even when you’re the one who produced the content.
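
The bookkeeping involved is simple to state, even if doing it by hand is slow: each caption line gets a start and end timestamp, and the set gets rendered in a standard format. Here’s a rough sketch using the common SubRip (.srt) format; the timestamps and caption text are made-up examples.

```python
# Sketch of caption timing: (start, end, text) cues rendered as SubRip (.srt).
# The hard, slow part in practice is choosing the timestamps, not this rendering.

def fmt(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(cues: list[tuple[float, float, str]]) -> str:
    """Render a list of (start, end, text) cues as an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{fmt(start)} --> {fmt(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Hi, just so you know, I'm deaf."),
              (2.5, 4.0, "Table for three, please.")]))
```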

But with Cued Speech, I think seeing the handshapes with the mouth would help facilitate that process, especially when combined with speech recognition software that will automatically sync a pre-uploaded transcript with the correct timestamps. It would also enable other cuers to contribute captions to the video (as Aaron did) without any discrepancies in interpretation, because it’s straight-up transliteration. Not to mention, it would be excellent cue-reading practice for budding cuers.

It’s kind of exciting to think how accessible Cued Speech videos can be with the captioning process. In that kind of work, every little bit that makes it easier helps.