Why Are Cochlear Implants Bad? A Primer

On The Horror of Cochlear Implants, Part 2, a Facebook acquaintance commented, “There must be some culture that I’m entirely outside of. Being able to make use of a sense that you otherwise could not is a bad thing somehow? Looks like I need to rethink my glasses…”

The traditional Deaf aversion to cochlear implants is baffling to most people. They can’t imagine why anyone wouldn’t jump at the chance to hear. Thing is, I understand that viewpoint. I don’t agree with it, but I understand it. The short answer is that it’s got a lot to do with the values that Deaf culture traditionally holds, most of which were shaped by events in the 20th century. The long answer… well, let’s dig right into the beginning.

***

Deafness fundamentally shapes the way you approach the world. More so if you lost your hearing at a young age. In the absence of a sensory input, the brain and body will compensate in other ways. Not quite Daredevil-style, but deaf brains do tend to rewire for heightened visual and tactile input. There are also some real interesting questions on how our brains process language. And apparently we have better peripheral vision too, by having more neurons in our eyeballs. In other words, I guess we’re… glorified chameleons?

[Image: a chameleon]

Kinda like this, but sexier.

That kind of thing also leaks out into how we think, talk, and behave. Instead of “I heard him say…” I’ll say, “I saw that he said…” We hug and touch more. We’re blunter, because communication is hard enough to begin with, and dancing around the topic just makes it worse. (That bluntness has gotten me into trouble more often than I care to admit, incidentally). We tend to use more animated gestures and expressions. Oh, and we gravitate to light.

Kinda like… no. Exactly like this.

When deaf and hard-of-hearing people get together– especially at residential schools for the deaf– and find these commonalities in how they live and think and get shit done, they basically create their own language and communities that don’t really factor in sound at all. Modern technology has made that even more possible: video calls, flashing alerts, text and video messaging, emails.

(Matter of fact, Nicaraguan Sign Language is a modern example of this phenomenon. Prior to the 1970s, Nicaragua didn’t have a deaf community, nor a unified sign language; d/hh kids grew up with mostly hearing families and home-grown signs. Then someone threw a bunch of those kids together, made it into a school, added more kids… and over time, the kids developed a pidgin/creolized mishmash of their home signs. Years after the school started, an ASL researcher found that the younger students had not only copied the older students’ creolized sign language, but also sophisticated it further. If you don’t mind the paywall, here’s a link to the study: http://www.ethnologue.com/language/ncs)

However, some people disagreed on the benefits of this cultural and linguistic autonomy, at least in the US. And a lot of those people tried to steer their next generation of d/hh children toward integration into mainstream hearing society, particularly from the late 1800s to the mid-1900s– often to poor results, both socially and academically. Turns out, educating and integrating d/hh children based on sound instead of sight is a tad counterintuitive, especially when effective hearing aids and cochlear implants weren’t a thing yet. (And I haven’t touched on the numerous attempts to “cure” hearing loss, a lot of which did more harm than help.)

The rise of oralism in the early 20th century– using spoken language to teach d/hh students– created a domino effect. Residential schools for the deaf were downsized or closed; d/hh staff lost their jobs to hearing people who couldn’t sign; d/hh students were banned from using sign language– some punished by having their knuckles rapped bloody with a ruler, or slammed into drawers; the focus on speech rehabilitation overshadowed traditional studies like math, science, and the trades– and often at the cost of language development. The end result was social, educational, and ultimately career retardation for a large segment of the signing d/hh people nationwide. (Pro tip: want to learn some really rude signs? Bring up Alexander Graham Bell with a mainstreamed Deaf person over 30.)

There are books and books of history on this, but for starters, I’d suggest A Place of Their Own: Creating the Deaf Community in America by John Vickery Van Cleve and Barry A. Crouch; and Never The Twain Shall Meet: The Communications Debate by Richard Winefield.

Keep in mind, much of this was done in the name of “normalizing” d/hh children. Tell generations of signing d/hh people that they’re broken, threaten their nexus of social interaction and networking (i.e. residential schools), punish them for using an intuitive language, stunt their social and academic development by hyper-focusing on the one ability they collectively lack instead of their strengths… and you have the perfect recipe for resentment and a general mistrust of outsiders’ attempts to “fix” or “help” deafness.

Only in the past few decades has this trend started to reverse, particularly after the recognition of ASL as a language in the 1960s, and even then it’s often been an uphill struggle. Cue the mass adoption of the first cochlear implant in the early 1980s– new, experimental, requiring surgery, and with a variable success rate that depended on many factors to boot– and hackles went straight back up. “Oh, great, yet another attempt to turn us into something we’re not, and you want to cut into our skulls to do it.”

The cochlear implant isn’t a cure. But it was often marketed as such to hearing parents who didn’t know better, and more often than not, these parents weren’t made aware of American Sign Language or Deaf resources as an option. People being people, the controversy quickly devolved into an “us-vs-them” mentality– not entirely without cause, given recent history. And unfortunately, a lot of misconceptions on cochlear implants from those early days still persist.

Nowadays, while the Deaf view on cochlear implants has softened to accepting implants for adults, you’ll still see some resistance when discussing implantation for children. That’s a post for another time, but essentially, it boils down to the same central issue: stripping d/hh people of their cultural identity and linguistic access. While I don’t think implantation by itself results in that– quite the opposite, actually– I do consider it wise to evaluate the motivation behind advocating cochlear implantation. Giving the kid options? Sure. Expecting it to do the work for you, or make him “just like a hearing kid”? Not so hot.

So, there you have it. By nature, Deaf people have, for the most part, learned to adapt to a world using sight and touch, to the point that for many of us, sound is just not… a thing. It’s not in our mental landscape, at all. Some people choose to add sound to their toolbox through hearing aids and/or cochlear implants. Some don’t. The key element there is choice. When someone else tries to push that choice for us, whether that’s for or against implantation, that’s where things go awry. And it’s worse when that someone is perceived as an outsider pushing to eradicate the very thing that gave birth to your cultural and linguistic identity.

Oh! You’re deaf? Here’s some braille.

It seems just about every other d/hh person out there has a story about clueless people offering them unnecessary “accommodations.” You know, the ones where they tell a receptionist that they’re deaf, and she hands out materials in braille, or they tell the airport staff that they’re deaf, and bam, out comes the wheelchair.

Somehow, my whole life, I’d missed out on this defining experience… until one fateful day, when I was 25.

I’d arranged to meet with some friends at a nearby Mexican restaurant, so I walked in, pointed to my ears, and said “Hi, just so you know, I’m deaf. I’m meeting friends here. Table for three, please,” while holding up three fingers. The hostess, a lady in her early twenties, went “Oh!”, held up a finger, and bounced over to a cabinet in the back. She pulled out this giant white binder, carried it back to the front desk, and flipped it open. I looked down to see rows of raised dots: braille.

Taken aback, I waved my hands and said, “Oh, no, no, I’m deaf. I just need the regular menu, please. And table for three.” Again, the “Oh!” and the finger and the bouncing back to the cabinet, whereuponwhich[*] she pulled out another Giant White Binder and flopped it open on the front desk. I looked down.

Spanish.

The woman gave me a giant white binder full of menu items in Spanish.

Quite at my wits’ end, I thanked her again, grabbed the regular menu, and repeated that I just needed a table for three. After some back-and-forth she finally led me to an empty table in the back, where I proceeded to Facebook about it to the world.

The best part? This happened in Austin, Texas.

Five minutes’ walk away from the Texas School for the Deaf.


[*] yes, whereuponwhich is a real word. Because I said so.

Talk to the Experts!

If there’s just one thing I could tell anybody trying to learn more about the myriad of issues involved in deafness, it’d be this:

If you want to learn more about Cued Speech, ask someone who uses Cued Speech. If you want to learn more about American Sign Language, ask someone who uses American Sign Language. Same for cochlear implants, hearing aids, visual phonics, whatever. And take their word for it. Don’t patronize by implying that they’re an outlier. And don’t mix ’em up– that is, don’t expect an in-depth, balanced view on Alexander Graham Bell or cochlear implants from a 70-year-old Deaf signer. Likewise, a spoken-language proponent may not be terribly knowledgeable about or sympathetic to Deaf Culture and ASL.

This isn’t to say that you can’t share opinions and resources. But like any other community, the d/hh population has its share of controversial topics, especially regarding children. Bias is always, always a factor. So is lack of knowledge and direct experience. It’s worse if the community itself tends to be rather homogeneous. As a result, misinformation can spread quickly, with no one to correct it. And I can assure you, I’ve seen my share of this with Cued Speech, especially in deaf education.

This isn’t necessarily deliberate, by the way. In my experience, most educational professionals are simply not aware of Cued Speech. If they are, they fall into four broad categories:

1) They don’t know of anyone who uses it and/or have not seen the research, so they may assume that it doesn’t work.

2) They think it’s another variant of Visual Phonics and/or may not see it as a viable communication option.

3) They don’t see the need for it, citing that they use Signed English or a Bi-Bi approach with ASL.

4) They are open to it, but don’t know of any local resources nor demand for it.

Likewise, most d/hh people don’t use or see Cued Speech in action, although most people I meet are very accepting of the fact that I use it, and many are curious about how it works. But for the most part, they don’t know anything beyond what I’ve shown them. Often, a good portion of our initial conversation is spent debunking misconceptions about Cued Speech.

As for those who have had experience with it, including me, most of the feedback has been very positive. I did meet a few who had tried Cued Speech and decided it didn’t work for them, either because of resources or because they just didn’t ‘click’ with it. And that’s fair; everyone is different. The key here is that they tried it out for themselves and formed their opinions based on what they had personally encountered. More than that, these people could share the nuances that factored into their situation: a strong family network, mental and physical health, finances, access to resources, etc.

This, by the way, applies to anything in the deaf and hard-of-hearing community. Take any second-hand experience with a grain of salt.

Cued Speech is not Visual Phonics

DISCLAIMER: In describing Visual Phonics, I’m going off what I’ve seen from those who have used it or learned it, including videos on YouTube. If my information here is incorrect, please let me know in the comments. Also bear in mind that this is not intended to be a value judgment; it’s simply my attempt at explaining how Visual Phonics and Cued Speech differ based on what I know.

Like Cued Speech, Visual Phonics has a cue for each phoneme in the English language, and they are based on pronunciation, not orthography. Its aim is to give visual access to the phonemes of the English language. With this definition, you can see why Cued Speech and Visual Phonics are often lumped together in the same category. I see some key differences, though, and a lot of it has to do with how Cued Speech was designed for maximum efficiency in movement.

If you’re not familiar with Cued Speech, trot on over to this chart and note the handshapes and placements:

http://www.cuedspeech.org/pdfs/guide/Cue-Chart-American-English.pdf

So, with that in mind, here are the differences I see:

  • Handshapes. 
    VP: 46 cues, one for each phoneme. These cues are akin to very distinct gestures or individual “signs.”
    CS: 8 handshapes, with 3-4 consonants per handshape. These handshapes are held flat and are differentiated by finger extensions.
  • Movement.
    VP: These cues seem to imitate the movement or rhythm of the phoneme (for example, long vowel sounds versus short ones). I don’t know for sure if this design is deliberate, but I do notice the correlation.
    CS: Four locations around the face– chin, cheek, side, throat. Like the handshapes, 3-4 vowels are assigned to each location.
  • Communication.
    VP: I don’t know of Visual Phonics being used as anything other than a speech support. If anyone does, let me know in the comments. It does look like it would be cumbersome to use in real-time communication, but again, let me know if I’m mistaken.
    CS: Can be used for real-time communication or speech support. Cued Speech was designed for efficient movement and the least strain on the wrist and fingers. Cornett chose the handshapes he did for a reason; you can move easily from one handshape to another, and from one placement to another, whilst matching speaking speed.

This is what I see right off the bat. If you’ve anything to add, there’s the comment button below.