Ching-Chong Cued Speech Chang

The Deaf community takes up arms, and rightly so, when a celebrity or comedian mimics gibberish ASL. Latest offender: Jamie Foxx on Jimmy Fallon's show. Others include Chelsea Handler, Cecily Strong, and pretty much any SNL sketch to do with sign language.

Now, I consider myself a hard person to offend. Gibberish ASL has made the rounds so often by now that I just consider it a cheap shot, comparable to putting on horn-rimmed glasses, fake buck teeth, and chattering out a “chinky chinky Chinaman” routine. It’s been done to death, it’s connected to negative and insulting stereotypes, and it’s nothing like the original language or culture, so it doesn’t even make enough sense to be funny.

In other words, it’s pulling random gestures out of one’s ass. It’s lazy, tacky, and trite. Hearing comedians can be bad enough about this; you’d think Deaf comedians would know better.

You’d think. If you don’t have three minutes to spare, skip right on to 2:10.

Now, the joke itself starts out OK. The driver decides to weasel out of a speeding ticket by pretending that he knows Cued Speech– so of course, he bungles it up, thinking the cop won’t know better. The cop recognizes the driver’s attempts at Cued Speech, holds up his finger, and returns to his squad car…

…and takes out a paper with cue words printed on it, replying with his own version of cue gibberish.

OK. A few things to say here.

  1. Remember, this is at Gallaudet. The only university for the Deaf in the world, one that hosts a multitude of sign languages from all over the globe. It is, in fact, the birthplace of Cued Speech, with a vibrant Cued Speech community in the DC, Virginia, West Virginia, and Maryland area. How hard would it have been to find someone who knew Cued Speech to play the policeman, or even to have the policeman flag down someone who happened to know both ASL and Cued Speech?
  2. He couldn’t at least have mouthed with the cues? That’s how Cued Speech works– it clarifies lipreading. There is no Cued Speech without lipreading!
  3. What’s up with the paper? It’s not… you can’t just cue right off a sheet of paper without knowing Cued Speech already. Yes, I talk about how you can learn the system off a sheet of paper in a weekend… but that doesn’t mean you can start cueing fluently right off the bat. Again, I think the video would have worked much better if the policeman started cueing fluently, and/or called in someone who knew Cued Speech.

I don’t know if the original author intended to insult Cued Speech. I don’t think so; my impression is that Cued Speech was a handy option for tricking a policeman who most likely only knew sign language. To be honest, I was glad to see Cued Speech getting recognition at Gallaudet! Unfortunately, making up random cues, instead of taking the time to reproduce a reasonably accurate version, cheapened the humor for me.


On a more positive note, this is one of the very few sign language parody videos I actually liked. At risk of ruining the humor by overanalyzing it: first, her “signs” actually have some relation to what she’s trying to say, so part of the fun is seeing how she acts out several concepts. This requires effort and on-the-spot thinking. In fact, a lot of deaf comedy acts incorporate this element; they try to “sign” without actually signing. Second, while the video pokes fun at both the interpreter and the mayor– especially on the Spanish bit– it isn’t insulting or demeaning to the broader d/hh community (at least, I don’t think). While its execution isn’t perfect, I’d say they got the idea on this one right.

Hearing Loss: A Visual Comparison

Most people unfamiliar with hearing loss seem to visualize it as a continuum. Either you hear “more,” or you hear “less,” and all you need to do is turn up the sound.

Not quite. Hearing loss tends to be more patchy. For example, higher-frequency sounds are usually the first to go, particularly with age-related hearing loss. This usually means the person will miss out on sounds like s and z; or they'll confuse similar sounds like t, p, and k.

A visual representation of hearing loss would most likely look something like this:

Visual impairment vs hearing impairment
(Pardon the title; it came with the graphic.)

So, that leads us to the question: what does hearing with hearing aids or cochlear implants look like?

Well, the fundamental difference between the two is this: hearing aids amplify the sound input your inner ear and brain can perceive; cochlear implants replace your natural sound input with an electronic substitute.

I wish I could take credit for the concept behind this next illustration, but I saw it on another hearing loss blog eons ago, back in the mists of time, when people still used AOL and MySpace (much to the chagrin of humanity). I've replicated it below:

color wheel comparison

What sounds might “look” like with hearing aids and cochlear implants.

This, incidentally, is why you’ve got to be severely to profoundly deaf in order to qualify for a cochlear implant; the implant destroys what little residual hearing you have, and the substitute will be a “pixelated” version of the real thing. Implantation becomes a better trade-off when you don’t have enough usable residual hearing to begin with.

Deafness: Is It Really A Disability?

In college, I was taught about two approaches to deafness: the medical approach, and the cultural approach. Essentially, the medical approach regards deafness as something to be fixed or cured; the cultural approach regards deafness as something to be embraced and celebrated. Now, I won’t lie: after years of fighting to be “normal,” the Deaf community was a welcome respite that helped me solidify my identity outside of my hearing loss. But that niggling feeling remained: it wasn’t the whole story, especially when it came to job-hunting.

Deafness is pretty unique in that it’s one of the few disabilities that affords near-complete independence. We can drive, we can move around, we can hold down jobs in any physical and intellectual capacity. The only thing we– most of us– struggle to do is communicate in a hearing world.

Unfortunately, that last one is a pretty big deal, especially in networking and securing employment; or in seeking information and education. It’s much like being a perpetual foreigner– without communication, you miss out on language, social cues, and local culture. And not everyone is willing to accommodate, or they don’t know how.

In part, that’s what gave rise to Deaf culture. At various points throughout recent history, a bunch of deaf people got together, worked out their own communication and social norms, and out of it came a distinct language and culture. Over time, a social network for education and employment also developed– it wasn’t and still isn’t uncommon for Deaf people to find jobs in residential schools, ASL courses, and municipal social work.

Outside of those niches, however, our options become… more complicated. A whole lot of service and sales professions– for example, reception, hospitality, and nursing– rely heavily on verbal communication. At least, as most people understand it. Mind you, several deaf people have found workarounds for succeeding in these types of jobs (many of whom are cuers!)*; often, their biggest challenge lay in convincing their employers that they could do it, albeit in a different way. Quite a few have just gone ahead and started successful businesses, notably in Austin, Texas.

These people, however, are a bit of a rarity.

A paradox: if deafness isn’t a disability in most senses of the word, then why do so many of us end up on SSDI? Or worse, straddling the poverty line?

Any objective measure comes up with two answers:

  1. Deaf people struggle to access secondary information in an auditory environment. We don’t usually overhear things like hearing people do; direct communication is how we learn and retain information. This has major implications for education.
  2. It’s harder to convince employers to hire and retain deaf employees at a living wage. We take longer to find jobs, and we get promoted at slower rates.

The best reconciliation I’ve heard for that paradox so far came from this Australian deaf blogger,** who defined deafness as a social disability. Once I thought of it that way, all those niggling pieces in my mind finally fell into place. See, one of my biggest hurdles in the Great 2014-2015 Job Search was networking at social events and job fairs. Imagine a patchwork conversation like this:

Me: So what kind of job do you do?
Them: Oh, I work at …. [unintelligible]
Me: Say again?
Them: [unintelligible] administrations at [unintelligible] in Dallas.
Me: Oooh. Administration? That sounds interesting.
Them: Yeah, we do a lot of paperwork and [unintelligible].

Not really a whole lot to work with, so the conversation peters out. And that happens everywhere: church, work, parties, social events. Building relationships is the whole point of networking, and how do you fluidly do that with persistent communication breakdowns?

The social model also explains why deaf people so often flourish in a variety of roles within deaf/disability/diversity-related occupations. Those occupations are designed to facilitate deaf-friendly communication, which in turn enables deaf people to build personal connections with coworkers, supervisors, and educators.

We’re not disabled, for the most part, unless our environment makes it that way.


*This does not include the relatively few professions where safety unequivocally relies on verbal communication, like armed services, police field work, and firefighting. I do know deaf people who work in these professions, but they tend to be in volunteer or support roles, not in active duty.

**Sadly, I lost the link to the Australian deaf blogger, because I suck. If anybody knows who I’m talking about, please feel free to drop me a line so I can credit him. It’s really an excellent article.

Where the Americans with Disabilities Act Falls Short

The ADA is a marvelous legislative tool. I have, however, noticed a disturbing tendency to rely on it as a blanket solution, or worse, a legal bludgeon. Here’s the thing. In practice, its compliance rests on three conditions:

1) the needed accommodations fall within the definition of “reasonable”
2) the owners are aware of the requirements
3) the owners are willing to comply

Now, larger businesses usually have the money and the sense to ensure that they’re accessible. Smaller businesses, especially those in older buildings… well, it depends.

Take those smoke alarms with flashing strobe lights, for example. A small-to-medium hotel may get a grand total of 1-2 deaf guests a year, if that. Legally, new hotels or hotels undergoing renovation are required to purchase and install strobe lights in a portion of their units, and ideally, they'd do so posthaste because, yannow, preventable death by immolation tends to put a damper on business and common decency.

But. What if it’s a tiny bed-and-breakfast? What if the accessible rooms are full? Or the hotel is in an older building and hasn’t gotten around to updating the alarms? Or the hotel is newer, but for whatever reason, they aren’t ADA-compliant?

You could always file a Title III complaint. First, though, you’ve gotta collect proof of the discrimination (photos, written documentation, etc.), write a letter, and mail it to the US Department of Justice, where it will be filed along with, I don’t know, 10,000 or however many other cases they’re handling on that given day.

Or, better yet, you can have a sit-down with the owners and tell them, “Hey, just so you know, to make your rooms accessible to anyone with hearing loss– especially older tenants– and ensure their safety, you can swap out your smoke detectors with these alarms and write it off as a business expense. Plus, you’ll be in the clear legally. If finances or labor are a concern, you can start with the legal minimum required and swap out as you replace older alarms.”

However, neither option resolves your immediate problem when the hotel isn't already ADA-compliant. Let's say you've been driving for eight hours straight, you are in the middle of nowhere, you need a place to crash, and this hotel is the least seedy place in town. Not exactly a plethora of options there, unless there's an appropriate alarm immediately available and they're able and willing to install it right away.

Apartments are a bit more clear-cut, but again, that relies on 1) the owners' and builders' awareness of ADA requirements and 2) the owners' and management's willingness to accommodate you, rather than give you a difficult time over the “reasonable” stipulation. (It also depends on whether you rent from an individual owner vs an apartment complex.) If and when the apartment does install flashing smoke alarms, you won't be able to take them with you when you move. So, you'll have to repeat the process all over again with the next place.

Instead of relying on others' foresight and goodwill in being ADA-compliant, why not focus our energies on finding solutions that allow for accessibility on our own terms? I reference the flashing smoke alarm because it's a perfect example of how that gap between ADA mandate and reality can be closed. Modern technology allows us to create portable smoke alarms that we can take with us when we move to a new place, visit a friend's house, or travel. In fact, Gentex and BRK both have their own lines of portable flashing smoke alarms. Although the ones I've looked at seem to be a bit cumbersome (and friends' feedback confirms this), it's a start.

This applies to captioning and interpreters as well. I’ve been looking and looking for a reliable transcription app, because straight up, finding a qualified interpreter or captionist– especially for nonprofits, churches, or volunteer events– is a royal pain in the ass. Typically, these organizations don’t have the funds to pay professional rates, and often, things come up last-minute.

While automatic speech recognition software right now isn't perfect (often not even intelligible), a decent transcription program packaged as a phone app would make spoken presentations much more readily accessible when an interpreter can't be secured due to funds or time. I can only dream of the day when automated speech transcription hits a point where it's virtually error-free– in fact, I've noticed significant improvement in YouTube's captions over the years.

The ability to create devices that can provide access, that we can use wherever and whenever we need it– that, to me, is true accessibility. Not going through a lengthy explanation with just about every vendor on what the ADA requires and hoping they don’t find some excuse to wiggle out of it, trying to determine who’s going to pay for it, or weighing if it’s worth the hassle.

Those Oh-So-Cool Signing Gloves

You know a video’s going viral when at least three people ask or tell me about it in as many days. 

On one hand*, COOL. And surely a good starting foundation for more advanced technology. On the other hand*, you know there’s a but coming…

  1. It doesn’t seem to address facial expression or body language, which are two essential components of sign language. Those two don’t just add flavor; they add meaning. Not sure how you can track these things just yet.
  2. “Pure” sign languages (i.e., ones that haven’t been adapted for speech) also tend to be more spatial than linear, plus the grammar is typically wildly different from spoken language. Even sign systems designed to transliterate speech generally don’t catch all components of spoken language, so I’d expect the voicing to be piecemeal at best.
  3. There’s no reverse translation; it’s sign-to-voice only, so it doesn’t make spoken language visually accessible for d/hh people.
  4. The translation would probably be akin to running something through Google Translate– if not worse.

More than that, not all d/hh people know, use, or even prefer sign language. Even among signers, quite a few prefer to use transliteration rather than interpretation. Anecdotally, I’ve heard of captioning gaining far more popularity in colleges than sign language interpreting. Personally, while I have no qualms about using sign language to converse directly with other signers, I’ve had way too many interpreting mishaps to trust it for anything beyond basic conversation with English speakers. It’s far too much reliance on a third party’s understanding and expertise for my liking. I’m not that much more optimistic about a machine.

On the other other hand*, I could see these gloves working better for straight fingerspelling or Cued Speech, especially if they were combined with automated transcription software. Unlike sign languages, Cued Speech has a finite set of eight handshapes that can be matched with a similarly finite selection of phonemes to produce words. I expect it'd sound incredibly robotic– which would certainly add an extra twist to the blog name, A Croaking Dalek— but there would likely be less potential for word jumble like what you'd get with ASL or Signed English.
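That finite inventory is the whole appeal from an engineering standpoint. As a rough sketch of the idea (the handshape-to-consonant and placement-to-vowel groupings below are simplified placeholders for illustration, not the actual Cued Speech chart), a glove that reports one handshape plus one placement only ever has to choose among a handful of syllables– a far smaller search space than open-ended sign recognition:

```python
# Sketch: why a finite cue set is easier to voice than open-ended signs.
# NOTE: these groupings are illustrative placeholders, NOT the real
# Cued Speech handshape/placement assignments.

HANDSHAPE_CONSONANTS = {
    1: ["d", "p", "zh"],
    2: ["k", "v", "th", "z"],
    3: ["h", "s", "r"],
    4: ["b", "n", "wh"],
    5: ["t", "m", "f"],
    6: ["l", "sh", "w"],
    7: ["g", "j", "dh"],
    8: ["y", "ng", "ch"],
}

PLACEMENT_VOWELS = {
    "side": ["ah", "oh", "uh"],
    "chin": ["eh", "oo", "aw"],
    "mouth": ["ee", "ur"],
    "throat": ["a", "i", "ou"],
}

def candidate_syllables(handshape: int, placement: str) -> list[str]:
    """Every consonant-vowel pairing a single cue could encode."""
    return [c + v
            for c in HANDSHAPE_CONSONANTS[handshape]
            for v in PLACEMENT_VOWELS[placement]]

# A glove reading of (handshape 5, chin placement) narrows the guess to
# at most 3 consonants x 3 vowels = 9 syllables -- a small enough set
# that mouth shape or a language model could plausibly disambiguate it.
print(candidate_syllables(5, "chin"))
```

In a real system, the mouthshape (or a predictive language model standing in for it) would pick the intended phoneme out of each small candidate set– which is exactly the disambiguation job lipreading does for human cue-readers.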


* I promise these puns are completely unintentional.

If People Treated Vision Loss Like They Do Hearing Loss

“You need to be able to see in order to succeed.”

“Stare harder at this page. Stare harder.”

“Well, what if we just turned up the light a bit? Would that help? Yes, I know you said you can’t see at all, but what if…”

“Here!” *grabs arm* “Let me do everything for you. Just sit down over there. No, no, it’s fine. Obviously your eyesight is inextricably linked to the function of your arms and legs.”

*waves* “Can you see this?” *waves even more furiously* “What about THIS?” *transforms into human windmill* “How about THIIIS?!”


“Have you thought about getting [insert expensive medical option that inevitably requires surgery and upkeep]?”

“I know Braille!” *punches random holes in piece of paper*

“But how could you possibly live without seeing the works of Michelangelo or Picasso?”

“It’s not our fault you crashed into that table even though we switched everything around the other day without telling you. You need to pay more attention.”


And now I’m going to bet within the week (nay, the day), someone’s going to leave a comment telling me that blind people do get these comments.

What’s the Bigger Picture That AGBell’s Missing With Nyle DiMarco?

The Deaf community’s in a furor over AGBell’s post on Nyle DiMarco (who was the winner of the last America’s Next Top Model season, and also Deaf, and also an astonishingly pretty man).

That post, in turn, was prompted by a Washington Post article, also on Nyle DiMarco, in which he advocates for ASL for d/hh children, calling it “their own language.” Matter of fact, the AGBell article identifies him as a political activist, pointing out that he’s started a foundation dedicated to promoting deaf infants’ access to American Sign Language.

So now that we are all caught up on the drama, here are the snippets that caused the kerfuffle. DISCLAIMER: This is not meant to be an anti-endorsement of AGBell, as I’ve actually found many of their proponents to be quite open-minded. Heck, I attended a Montessori school with their namesake, and there’s been a long-standing partnership between several AGBellers and Cued Speech. On this one, though, AGBell missed the mark.

1. Lip service to viable alternatives.

“The Alexander Graham Bell Association for the Deaf and Hard of Hearing (AG Bell), applauds DiMarco’s achievements and recognizes that ASL exists as a communication option for deaf children. However…”

Wince. As a native cuer, this opening strikes a bit too close to the pandering I’ve seen in regards to Cued Speech. “Oh, well, of course Cued Speech is an option, but… [insert spiel about how Our One True Way is so much better].” (D/HH educators, please do take note, and please don’t do this. After the parent/student/child has been around the block a few times, it gets old.)

2. Couched, cagey language. 

“While bilingualism (use of ASL along with spoken language) may be helpful to deaf children who are unable to fully achieve spoken language, a young child whose family desires spoken language often achieves their desired outcome better through a full immersion in spoken language.”

“may be helpful”? “often”? This doesn’t really do wonders for winning credibility among people that you’re purporting to advocate for. Of course a visual language/mode is going to help d/hh children “who are unable to fully achieve spoken language.” Children learn to speak/sign/cue largely through mimicry. It is exceedingly difficult to copy what you can’t clearly access to begin with. Not impossible, but… difficult.

3. Lack of data.

“Deaf children frequently communicate quite well with listening and spoken language alone, and the number of children who have a need for ASL has decreased dramatically.”

Well, hold on. Where are the cites for this? Earlier in the article, AGBell states that 90% of hearing families with d/hh children are opting for listening and spoken language, citing BEGINNINGS for Parents of Children who are Deaf and Hard of Hearing in North Carolina. So, my next questions: where and how was this data collected? And could you pretty please link to it?

4. Lack of empathy.

Come to think of it, in writing point 2, I realized that part of the Deaf resistance to AGBell’s stance stems from this seeming lack of acknowledgement for just. how. much. work. goes into a spoken-language-only approach, at least for the folks I know.

Heck, even one of the students that AGBell highlights goes into a lengthy description of the prep work he undergoes for high school and college classes. Which– hey– is life. No right or wrong about it: if that’s what it takes to get him where he wants to be, more power to him. (That said, I do worry a bit about burnout, having been there and done that in my senior year.) But come on, AGBell, don’t you realize that amount of extra work isn’t exactly normal? Maybe not even ideal? At what point do you hit diminishing returns?

5. Sound is put on this odd pedestal. 

“In videos available on AG Bell’s YouTube channel, families share the remarkable abilities of deaf children today—making music, singing songs, and participating fully in sports, theater and more, with wonderful speech and remarkable hearing.”

Look, I enjoy music– heck, I taught myself a few basic piano songs in elementary school– but this is giving off a weird “if you don’t engage in this specific interest, you’re irredeemably missing out” vibe. Plenty of signing d/hh people love music, with or without aids, and plenty of hearing people don’t listen to music regularly, if at all, so I’m pretty sure this is a variation in people-being-people and not so much level of hearing.

Furthermore: as numerous residential school programs and d/hh athletes/performers indicate, sound is not a requirement for being fully involved in extracurricular activities. Yes, when you've got a d/hh child in a hearing group, it helps massively– but you know what? So do visual accommodations and learning some kind of visual communication method, even if they're just made-up signals. Come on, let's see some middle ground here.

Let’s rephrase this: “In videos available on AG Bell’s YouTube channel, families share the remarkable abilities of blind children today–making art, drawing pictures, and participating fully in sports, theater and more…”

Are you getting why this sounds a bit odd? Am I being odd? I mean, I love art, and I draw lots of pretty pictures, but I wouldn't expect everyone to be into it, nor try to push visually impaired children into taking it up “because you need to SEEEEE.”* **

*If they want to pursue it, that’s one thing. In that case, I’mma be all let’s go find tactile paints or play-dough or whatever and figure out a way to make it work because yay art.

**Yes, I realize this isn’t a perfect analogy because deafness entails a two-pronged challenge– listening and speaking– whereas visual impairment is, well, mostly an eyesight issue, at least in the mainstream hearing world. Visual impairment in the Deaf world, sadly, carries very much the same challenges that d/hh people do in hearing settings, though.


Now, since this is primarily a Cued Speech blog… some cuers have wondered where the hell we fit into all of this. Well, we know ASL isn’t a deaf child’s “natural” language (despite Mr. DiMarco’s word choice), but we don’t neatly fit into AGBell’s box, either.

So, I close with an insightful comment from my friend Benjamin Lachman: “Everyone’s lost their damn minds. Any language can be effective provided it is accessible as early as possible. ASL is effective if there is exposure to fluent signers ASAP. English is effective if there’s auditory feedback and/or visual access (preferably both) as soon as possible. Period. That’s our [cuers] message, in my respectful opinion. Everybody else is just putting their collective feet in their collective mouths.”


The Art of Disclosure: when do you tell an interviewer you’re d/hh?

In all my nagging at other people for their opinions on when to disclose hearing loss, I’ve yet to find any other d/hh topic so cleanly divided between hearing and d/hh respondents. Hearing loss isn’t something you can really keep on the downlow, even if you’ve got amazing speechreading and speaking skills. At some point, it has to be disclosed. The question is when.

In asking around, I’ve found that the breakdown usually goes as follows:

Hearing viewpoint: Best to be upfront; it has to come out at some point. If they have a problem with it from the beginning, you probably won’t ever convince them otherwise. No point in wasting either your or their time.

D/HH viewpoint: Leave any mention of hearing loss off your resume. If they call, don't tell them you're deaf, and don't have the relay interpreter introduce herself. Don't say anything about it until you get an interview, preferably in person. (Note: Some d/hh people in my network reported applying to hundreds of positions with no bites; when they removed any mention of their hearing loss—such as having attended a residential school for the deaf—they finally started getting responses. I've yet to hear about any of these responses ultimately ending in a job, however.)

My experience: When I job-hunted after graduation, I seemed to get more traction when I disclosed my hearing loss earlier rather than later. Now, I didn’t see any reason why it needed to go in my resume or my cover letter, unless being d/hh could lend strength to that position, like diversity or accessibility. Usually, I’d disclose it when we discussed setting up a phone or in-person interview, saying something like, “I use a relay service due to hearing difficulties.” I preferred to use “hearing difficulties” instead of “deaf” because 1) my cochlear implant does make me functionally hard-of-hearing, and 2) I thought it sounded a bit less intimidating. I also emphasized that visual/text communication could easily substitute for spoken communication. Usually, the response was that hearing loss wasn’t an issue in this job. (Of course, it’s not like they could’ve told me otherwise without incurring a massive HR headache, but they could’ve also opted to say nothing at all…)

Despite what my DVR counselors had advised, I found that I hated surprising potential employers with that information at a phone or in-person interview. It felt awkward, there was always fumbling, and I didn't feel like anyone was adequately prepared. About halfway through my job search, I decided to just be upfront about it when it came up. Yeah, it probably narrowed down my job opportunities, but at least they were narrowed down to employers that I could safely assume would be open to hiring d/hh people.

(What I did run into more than once was that millennial catch-22: “You’ve got great credentials and we’re very impressed with your writing, but we need someone with more experience.” Go figure.)

Now, I’ve talked to other d/hh people who have had different experiences. So, I invite you to come share in the comments.


“Cued Speech isn’t a language.”

…and it doesn’t need to be. Cued Speech is a communication mode that visually represents an existing language in real time. That’s why professionals make a distinction between Cued Speech, Cued English, and Cued language.

Sometimes I’ve heard that statement used as a put-down: supposedly, because Cued Speech isn’t a language in and of itself, it either can’t be used to instill language into children, or its input will be fragmentary at best. My experience says otherwise.

As a rough analogy, you could say the same about writing. Writing itself isn’t a language; it’s a way of representing language in another format. It codifies sound into print… mostly. (English is a stupid, stupid language.) Likewise, Cued Speech also codifies sound into a visual format, and much more faithfully than written English (again, stupid language). Both are valid, successful teaching tools and modes of communication.

I think part of the confusion comes from the frequent misidentification of Cued Speech as a variant of Visual Phonics. Unlike Visual Phonics, Cued Speech was tailored for smooth transitions between different handshapes and placements, which facilitates real-time communication– in other words, you can cue as you speak at a “normal” pace. As far as I know, that isn’t feasible with Visual Phonics and inhibits its use for immersive language acquisition. However, because the two systems’ basic premises are similar (i.e., visually convey the properties of sound), they often get lumped in with each other.

Why are Cochlear Implants Bad? A Primer

On The Horror of Cochlear Implants, Part 2, a Facebook acquaintance commented, “There must be some culture that I’m entirely outside of. Being able to make use of a sense that you otherwise could not is a bad thing somehow? Looks like I need to rethink my glasses…”

The traditional Deaf aversion to cochlear implants is baffling to most people. They can't imagine why anyone wouldn't jump at the chance to hear. Thing is, I understand that viewpoint. I don't agree with it, but I understand it. The short answer is that it's got a lot to do with the values that Deaf culture traditionally holds, most of which were shaped by events in the 20th century. The long answer… well, let's dig right into the beginning.

***

Deafness fundamentally shapes the way you approach the world. More so if you lost your hearing at a young age. In the absence of a sensory input, the brain and body will compensate in other ways. Not quite Daredevil-style, but deaf brains do tend to rewire for heightened visual and tactile input. There are also some real interesting questions on how our brains process language. And apparently we have better peripheral vision too, by having more neurons in our eyeballs. In other words, I guess we’re… glorified chameleons?


Kinda like this, but sexier.

That kind of thing also leaks out into how we think, talk, and behave. Instead of “I heard him say…” I’ll say, “I saw that he said…” We hug and touch more. We’re blunter, because communication is hard enough to begin with, and dancing around the topic just makes it worse. (That bluntness has gotten me into trouble more often than I care to admit, incidentally). We tend to use more animated gestures and expressions. Oh, and we gravitate to light.

Kinda like… no. Exactly like this.

When deaf and hard-of-hearing people get together– especially at residential schools for the deaf– and find these commonalities in how they live and think and get shit done, they basically create their own language and communities that don’t really factor in sound at all. Modern technology has made that even more possible: video calls, flashing alerts, text and video messaging, emails.

(Matter of fact, Nicaraguan Sign Language is a modern example of this phenomenon. Prior to the 1970s, Nicaragua didn’t have a deaf community, nor a unified sign language; d/hh kids grew up with mostly hearing families and home-grown signs. Then someone threw a bunch of those kids together, made it into a school, added more kids… and over time, the kids developed a pidgin/creolized mishmash of their home signs. Years after the school started, an ASL researcher found that the younger students had not only copied the older students’ creolized sign language, but also refined it further.) If you don’t mind the paywall, here’s a link to the study: http://www.ethnologue.com/language/ncs

However, some people disagreed on the benefits of this cultural and linguistic autonomy, at least in the US. And a lot of those people tried to steer their next generation of d/hh children toward integration into mainstream hearing society, particularly from the late 1800s to the mid-1900s– often to poor results, both socially and academically. Turns out, educating and integrating d/hh children based on sound instead of sight is a tad counterintuitive, especially when effective hearing aids and cochlear implants weren’t a thing yet. (And I haven’t touched on the numerous attempts to “cure” hearing loss, a lot of which did more harm than help.)

The rise of oralism in the early 20th century– using spoken language to teach d/hh students– created a domino effect. Residential schools for the deaf were downsized or closed; d/hh staff lost their jobs to hearing people who couldn’t sign; d/hh students were banned from using sign language– some were punished by having their knuckles rapped bloody with a ruler or their hands slammed in drawers; the focus on speech rehabilitation overshadowed traditional studies like math, science, and the trades– and often at the cost of language development. The end result was social, educational, and ultimately career retardation for a large segment of signing d/hh people nationwide. (Pro tip: want to learn some really rude signs? Bring up Alexander Graham Bell with a mainstreamed Deaf person over 30.)

There are books and books of history on this, but for starters, I’d suggest A Place of Their Own: Creating the Deaf Community in America by John Vickery Van Cleve and Barry A. Crouch; and Never the Twain Shall Meet: The Communications Debate by Richard Winefield.

Keep in mind, much of this was done in the name of “normalizing” d/hh children. Tell generations of signing d/hh people that they’re broken, threaten their nexus of social interaction and networking (i.e. residential schools), punish them for using an intuitive language, stunt their social and academic development by hyper-focusing on the one ability they collectively lack instead of their strengths… and you have the perfect recipe for resentment and a general mistrust of outsiders’ attempts to “fix” or “help” deafness.

Only in the past few decades has this trend started to reverse, particularly after the recognition of ASL as a language in the 1960s, and even then it’s often been an uphill struggle. Cue the mass adoption of the first cochlear implants in the early 1980s– new, experimental, requiring surgery, and with a success rate that varied widely depending on many factors to boot– and hackles went straight back up. “Oh, great, yet another attempt to turn us into something we’re not, and you want to cut into our skulls to do it.”

The cochlear implant isn’t a cure. But it was often marketed as such to hearing parents who didn’t know better, and more often than not, these parents weren’t made aware of American Sign Language or Deaf resources as an option. People being people, the controversy quickly devolved into an “us-vs-them” mentality– not entirely without cause, given recent history. And unfortunately, a lot of misconceptions on cochlear implants from those early days still persist.

Nowadays, while the Deaf view on cochlear implants has softened to accepting implants for adults, you’ll still see some resistance when discussing implantation for children. That’s a post for another time, but essentially, it boils down to the same central issue: stripping d/hh people of their cultural identity and linguistic access. While I don’t think implantation by itself results in that– quite the opposite, actually– I do consider it wise to evaluate the motivation behind advocating cochlear implantation. Giving the kid options? Sure. Expecting it to do the work for you, or make him “just like a hearing kid”? Not so hot.

So, there you have it. By nature, Deaf people have, for the most part, learned to adapt to a world using sight and touch, to the point that for many of us, sound is just not… a thing. It’s not in our mental landscape, at all. Some people choose to add sound to their toolbox through hearing aids and/or cochlear implants. Some don’t. The key element there is choice. When someone else tries to push that choice for us, whether that’s for or against implantation, that’s where things go awry. And it’s worse when that someone is perceived as an outsider pushing to eradicate the very thing that gave birth to your cultural and linguistic identity.