Signing Impaired: The Double Standard

I’ve posted about the use of “hearing impaired” and how it doesn’t bother me, though I do take care not to use it because many of my d/hh friends find it offensive. In the Deaf community, though, I’ve occasionally come across attempts to turn the tables by using the term “signing impaired” to refer to hearing people.

Perhaps ironically, even though “hearing impaired” doesn’t bother me, “signing impaired” has never felt right to me. Sometimes it’s used in the d/hh community as a joke, sometimes as a pejorative. Either way, it’s never made much sense to me. Here’s why:

  1. It comes off as hypocritical. You don’t like it when people use the term hearing impaired, so in turn, you use “signing impaired” to… I don’t know, teach them a lesson? What lesson, exactly?
  2. The way most people use the term “hearing impaired,” they’re just referring to your level of hearing. Despite its overt focus on hearing, it’s not intended to diminish you as a person. “Signing impaired,” though, definitely carries an insulting connotation– in my experience, it is usually intended as such. See #1 for my confusion on what it’s supposed to accomplish, exactly.
  3. It doesn’t mean anything. Ears are designed to hear. That’s what they’re for. If they don’t hear, then they are nothing more than funny-looking flaps of skin on your head. That’s what “hearing impaired” refers to. Hands… well, your hands work fine whether you use sign language or not. “Signing impaired” is about language proficiency, not physical ability– and that makes about as much sense as calling a native Chinese speaker “English impaired.”

In the grand scheme of things, it’s pretty minor. On the other hand, words have power, and the little things add up into big things. If we, as a community, want courtesy and respect from others, we need to model it in turn.

In A Perfect World

Some weeks ago, I watched this clip from ABC Family’s Switched at Birth:

Then, yesterday, I saw a Facebook video of a deeply upset Deaf man in a hospital waiting room as one of the nurses held up an iPhone with a VRI (video remote interpreting) interpreter on the line:

Transcript, with original commenter’s permission to copy-and-paste:

Nurse: ..can’t be in the video.
VRI (presumably SIM-COMing): Hello, my name is Kathleen; I’ll be your ASL interpreter #664403
Nurse: I can’t be in the video.
Interpreter: And I’ll interpret everything you say and keep… (not audible due to nurse speaking)
Nurse: You’ll have to delete it if I am, it’s-it’s against the law.
Nurse: Can you see her?
Interpreter (voicing for Deaf patient): Yes. The internet is not .. is very, very slow. I can see the interpreter; it’s very, very slow. It’s not a valid fair communication because the internet is not working.
Nurse: Ok well then you’ll have to tell him our director is coming. You’re our only option. I don’t know what else to do, this is the best we have at the moment.
Interpreter: Ok, um, just for your information, I don’t.. I don’t need to see you. If you could just keep the video camera on the Deaf patient, that would be great. … I only need to hear you, can you say that again please?
Nurse: I just said that this is the only option that we have until our- my director gets to the facility. I understand that the internet is slow, but this is the only thing that I have at the moment.
Interpreter (voicing for Deaf patient): It’s not the only option, period. It’s not the only option. VRI is not the only option because the internet is not working. It’s not the only option. We need to get (unintelligible) here now.
Nurse: OK we don’t have one.
Interpreter: This is the interpreter speaking, could you put the camera lens down on the patient and keep it there. Because right now it’s going all over the place so I cannot see his hands. Could you please stabilize the camera and keep it there. Make it stop moving. (Voicing for the Deaf patient: ) See this is an example: the interpreter can’t see because the camera keeps moving around and around and around. And so do you think this is fair communication?
Nurse: Yes ma’am I understand that.
Interpreter: What’s your name?
Nurse: My name is Tanisha.
Interpreter: Could you spell that for the interpreter please?
Tanisha: T-A-N-I-S-H-A
Interpreter: Tanisha, T-A-N-I-S-H-A, Tanisha.
Tanisha: Yes ma’am.
Interpreter: And what’s your last name Tanisha?
Tanisha: Akins.
Interpreter: Could you spell that for the interpreter please?
Tanisha: A-K-I-N-S
Interpreter: I’ve got K as in Kathleen, I, N-S. Is that N as in Nancy or M as in Mother?
Tanisha: N as in Nancy.
Interpreter (voicing for Deaf patient): So,.. can we go ahead with this appointment? (unintelligible – “I’m fed up with this?”)
Tanisha: If, if that’s what he, I mean if he’s okay with this, until we can get someone here. I-I’m just doing what my director told me.
Interpreter: Can we still get an interpreter to be on the way and use VRI to communicate until the interpreter er.. uh.. until the live interpreter gets here..
Tanisha: Exactly.
Interpreter: It’s already been an hour and a half. So… I’d like to, I would like a live interpreter to come here and uh.. and we’re waiting for an hour and a half. Can you call and request a live interpreter. We need an in-person interpreter.
Other Nurse: [unintelligible]…you can’t be recording.
Cameraperson(?): We are.
Tanisha: You’re going to have to delete that.
Other Nurse: I’m going have to del-
[END OF VIDEO]

Ech.

I have such mixed reactions to these clips.

Legally, both the Deaf man and Daphne from Switched at Birth are in the right. The ADA does mandate accommodations, and there’s probably no scenario where they matter more than when your life or health is at stake.

Pragmatically… they didn’t really help the hospital find any workable solutions. Honestly, I felt sorry for the nurses in the second video, who genuinely seemed to be doing the best they could with what they had at the time.

But d/hh people shouldn’t have to deal with this!  

You’re right. They shouldn’t have to. In a perfect world, they wouldn’t. But this isn’t a perfect world: internet connections fail; on-site interpreters aren’t available, or take a long time to arrive; the call comes in at 2am; hospitals struggle with budget and staffing limitations.

In all the times that I’ve wound up in the hospital– fortunately none of which were immediately life-threatening– I had an interpreter maybe twice. The first was an on-site interpreter whom the hospital, on its own initiative, called in when I showed up; even so, it took him about an hour to arrive on short notice. The other was a VRI on an iPad, and that one actually worked really well.

The other times, I either talked directly with the doctors/nurses, or we wrote back and forth. Because English is my native language, I didn’t have any language barriers with the written communication; it just took more time. It wasn’t ideal, but like the doctors and nurses there, I was doing the best with what I had. We all were.

That’s why reactions like these bother me. Sure, they rile people up, but they don’t really change anything about the financial or logistical challenges that institutions face in providing accommodations for deaf and hard of hearing clients, especially last-minute. It’s all complaint and no solution.

Worse, they reinforce some harmful perceptions. First, they suggest that sign language interpretation is what every d/hh person wants, when in fact many of us prefer other accommodations. Case in point: I am OK with sign language interpretation for basic conversations, but for in-depth medical explanations, I really prefer exact transcription (and Signed English usually doesn’t cut it). Moreover, I really don’t want to wait an hour for an on-site interpreter to show up when we can have the whole business done in fifteen minutes of writing back and forth.

Second, these reactions can play into a very simplistic, adversarial stereotype of d/hh people as angry, demanding clients– with the power of the ADA behind them to boot. Yeah, it’ll probably get you accommodations, eventually, but at what price– especially when dealing with institutions that are already overstretched and under-budgeted?

Fundamentally, communication is a two-way street. And no matter which route you go, it’s going to take time, effort, and patience– for everyone involved. This much I can guarantee you, though: being argumentative, confrontational, or otherwise difficult isn’t going to get you a solution any faster– and in some cases, can even impede your communication access.

Cueing Expressively as a Receptive Cuer

One thing about being the only cuer in the entire state: you get really, really good at cuereading. If you have only a few transliterators (or only one!), sometimes you get really, really good at reading their particular style of cueing. When I reconnect with other cuers in Illinois and Colorado, it takes me a while to adjust myself to reading their cues– partly because I see them only once, maybe twice a year. I don’t have that issue with my transliterators in Wisconsin.

Conversely, the transliterator gets used to your voice, so you find that you don’t need to cue as much, or as accurately, with them. As a matter of fact, I know many cuers who just voice for themselves without any cues whatsoever. I don’t know the ratio of cuers who cue expressively versus those who don’t, but I’ve seen more in the latter category. My guess is that for the majority of d/hh cuers, it’s just easier to drop the hands and talk.

The downside is that, well, these cuers don’t get to practice expressive cueing a lot, so either they can’t do it, or they do it sloppily. I was/am in the latter category, although I have been much more mindful of it over the past few years. By cueing sloppily, I mean we drop certain handshapes, or don’t put our hands in the right position (e.g., placing the hand on the cheek instead of at the corner of the mouth for “ee” and “ur” sounds). It usually doesn’t impact our overall comprehension, I think, but it’s not technically the correct way to cue.

I do suspect that part of it is probably just cuers co-opting the system to their own style, like how native signers or native speakers become a bit sloppier in everyday conversation. Part of it is due to cuers not getting enough exposure to correct cueing, and/or not being around other cuers. I imagine as cueing becomes more mainstream, and hopefully as we establish a stronger base of cued speech transliterators, we’ll have more good models to work off of. For now, this is a good issue to be aware of, especially with young d/hh cuers.

Hearing < Communication

The second time I went to China– on a youth mission trip to a rural middle school near Xi’an– a woman from another American group came up to me in the cafeteria and asked if she could pray for my hearing. Being used to requests like these, I said, “Sure, but can you pray for better communication instead? That’d help a lot more than just being able to hear.”

She repeated, “OK, I’ll pray for your hearing then.”

“Um. OK.”

And there went the most awkward praying-over session I’ve had thus far. Now, I don’t mind when people ask to pray for me. They mean well, and if my hearing is somehow miraculously restored someday, then sure, I’ll take it. This time, though, what bothered me was that despite what I’d told her about communication, she still fixated on my deafness.

Funny thing is, this was an American woman in the middle of China who spoke no Chinese whatsoever. Like nearly everyone else on that trip, she could hear perfectly, but relied on Chinese translators for communication (and even then, sometimes it got tricky because of their local accents). On the other hand, I probably knew more Chinese than 90% of the people on that trip– granted, most of it not being particularly helpful for talking with middle-schoolers, since our vocabulary in 2nd year Mandarin was largely limited to food, school, transportation, and setting up dates.

I think too many people, especially in religious circles, miss the larger picture when it comes to hearing. Yeah, being deaf in and of itself isn’t always a picnic, but what really inhibits us is that lack of communication. Hearing alone won’t fix that: communication’s a two-way street, and it takes effort and a shared language.

That Inner Voice

Every language has an underlying rhythm, a cadence that ebbs and flows. The vocabulary and basic grammar can be taught, but you’ve got to ride the current to develop a feel for it.

When I write, I have a “voice” in my head that tells me the rhythm, how it should “sound.” I’m putting all these words in quotes because I don’t really physically hear them. It’s just… flashes of words that zip across my mind, faster than I can catch them, because I’m too focused on the message to really think about each word that comes out.

I rely a lot on this “voice” when I study other languages, especially when I can mentally match it with facial expression, body language, and emotion. I’ve had it since I was little.

I have some hazy childhood memories from before I picked up Cued Speech, and while learning it at the AGBM school in Mount Prospect, Illinois. I saw things, and I pictured them, but I didn’t have words for them. I’m sure I had signs for them, but I don’t remember “seeing” print or spoken words for them like I do now.

This makes me wonder about my Deaf and CODA[*] friends, some of whom can pull out entire ASL poems and compositions at the drop of a hat. And once it’s out there, I see how everything merges. I’d wonder how the hell they thought of it, but I already know. Their inner voice is in ASL.

I did have one happy moment in an advanced ASL class on classifiers, though. Our instructor challenged us to show a meteor crashing into Earth with classifiers only. Either she picked me, or I volunteered– I don’t remember which– but either way, I went to the front of the class, held up two hands as if I were holding a ball, then jabbed my index finger into the center of that “ball” and spread my hands apart to mime an explosion. The whole thing took less than two seconds, and I honestly didn’t think twice about it; I just did what seemed most natural and effective for that particular concept. As soon as I finished, there was a brief silence, then a light round of clapping and nodding, and I saw that familiar look on my classmates’ faces, the same one I’d had so many times. The one that said, “ah-ha! So THAT’S how you say it!”


[*] Child of Deaf Adults. I have hearing CODA friends who sign far better than I could ever hope to achieve. Yes, I will hate them forever for it.

Oh! You’re deaf? Here’s some braille.

It seems just about every other d/hh person out there has a story about clueless people offering them unnecessary “accommodations.” You know, the ones where they tell a receptionist that they’re deaf, and she hands out materials in braille, or they tell the airport staff that they’re deaf, and bam, out comes the wheelchair.

Somehow, my whole life, I’d missed out on this defining experience… until one fateful day, when I was 25.

I’d arranged to meet with some friends at a nearby Mexican restaurant, so I walked in, pointed to my ears, and said “Hi, just so you know, I’m deaf. I’m meeting friends here. Table for three, please,” while holding up three fingers. The hostess, a lady in her early twenties, went “Oh!”, held up a finger, and bounced over to a cabinet in the back. She pulled out this giant white binder, carried it back to the front desk, and flipped it open. I looked down to see rows of raised dots: braille.

Taken aback, I waved my hands and said, “Oh, no, no, I’m deaf. I just need the regular menu, please. And table for three.” Again, the “Oh!” and the finger and the bouncing back to the cabinet, whereuponwhich[*] she pulled out another Giant White Binder and flopped it open on the front desk. I looked down.

Spanish.

The woman gave me a giant white binder full of menu items in Spanish.

Quite at my wits’ end, I thanked her again, grabbed the regular menu, and repeated that I just needed a table for three. After some back-and-forth she finally led me to an empty table in the back, where I proceeded to Facebook about it to the world.

The best part? This happened in Austin, Texas.

Five minutes’ walk away from the Texas School for the Deaf.


[*] yes, whereuponwhich is a real word. Because I said so.

How Cued Speech Represents Spoken Language

Anyone who’s familiar with manually coded English (MCE) such as Signed English, or with tools like Visual Phonics, may rightly wonder how Cued Speech can provide 100% access to English on the lips and hands. Fortunately, Aaron Rose of Cue Cognatio has designed and illustrated a 3-D model that shows the relationship between Cued Speech and spoken language.

Image courtesy of Aaron Rose.

Aaron explains this model as follows:

“There are three components to each ‘system’ [speech and Cued speech] for the purpose of expressing traditionally spoken language via speech and Cued Speech.

1.) Both systems use the same mouth shapes. 
2.) The hand shapes take the place of the tongue placements (place of articulation)
3.) The hand placements take the place of the voicing/air (manner of articulation). 

This is a general model and should not be used strictly for research purposes, but is intended to provide a better idea of how and why spoken language and cued language express the same linguistic information in different modes.”

The Bilingual-Bicultural Dilemma

I’ve studied at least five languages. I majored in English, and minored in American Sign Language and Mandarin, including a four-month study abroad in Beijing. In high school, I dabbled in a semester or two of Latin and Spanish. (I highly recommend Latin as a starter language, by the way; it’s an incredibly useful key for any Romance language.)

The one constant in all my language studies was that at some point, you must immerse. Bar none, that’s the best way to improve your proficiency. Even my ASL instructors stressed this, and mandated that we attend at least one Deaf event per semester.

Yet, the one glaring exception seems to be deaf children learning English. Most bilingual-bicultural (Bi-Bi) programs I’ve seen sidestep English immersion by establishing ASL as the base language, and teaching all or most classes– including reading and writing– in ASL with written English support.

There is a sound rationale behind this approach. Even with hearing aids and cochlear implants, deaf children don’t have the same access to spoken language that hearing children do. The bulk of our language proficiency comes through incidental learning, which for most people happens via auditory means. For deaf children, though, the primary mode is usually visual.

Hence, establishing English proficiency for deaf children is a toss-up between two general routes: either some variant of Signed English, which follows English structure much more faithfully, but tends to be functionally less complete as a language support; or American Sign Language, which is a complete language in and of itself, and as a result does not follow English structure.

The paramount objective is to establish a complete first language, ideally from fluent speakers. It’s much easier to pick up on other languages when you have a solid foundation in a base language. However, multilingual speakers will also tell you that the best way to increase your proficiency is full immersion– not just reading and writing, but also daily conversation with other native speakers. You can go only so far in studying a second language through your first language before you hit a roadblock. While proficiency is still very much doable– I’ve seen it several times, especially among prolific readers– it does get much harder. In my experience, you have to reverse-engineer. A lot.

How, then, do you reconcile these two paradigms in deaf education? By now, you know my answer is Cued Speech. It’s a 100% visual mode of communication that accurately represents spoken language in real-time, so hearing parents can act as complete language models for their deaf children without butchering ASL to fit English structure. And on the flip side, deaf children can attain full immersion in English, whether that is their L1 or L2+ language.

I’ve stated several times that Cued Speech would be the perfect addition to any Bi-Bi program. ASL would stay ASL, and English would stay English, and students would get the benefit of learning how to think in not only two languages, but also two different modalities.

“Cued Speech is just a tool.”

And sometimes that’s followed up with “…not a communication method.”

Well, first off, I’m a native cuer. I can cue anything to another cuer, and he’ll understand everything I say, and vice versa. It doesn’t matter if we voice or not; all the phonetic components of English are right there on our lips and hands. That is communication! It’s complete language access.

If you want to get picky about it, everything is a tool– i.e., a way to accomplish a particular end. Even sign language is a tool. Spoken language is a tool. Written language is a tool. They’re all ways of communicating. Cued Speech is an exact representation of an existing language.

The nice thing about Cued Speech is that it can be used by itself, voiced or unvoiced, alongside sign language, as a speech therapy support, as reading/vocabulary support, with d/hh kids, with autistic or learning-disabled kids, with ESL speakers…

The key word there is “can.” Its use is ultimately up to whoever uses it. Really, the fact that Cued Speech is a tool is probably its greatest strength: it can fit into a variety of approaches without detracting from their central philosophies.