TED Talk and Captioning

It’s finally been released: a TED talk on Cued Speech by Cathy Rasmussen.

Now, a fellow cuer, Benjamin Lachman, posted the video to our Facebook page and asked for some crowdsourcing on adding accurate captions. Another cuer, Aaron Rose, took him up on that request, and the link up there on the amara.org website now has accurate captions, although for some reason the direct YouTube link still transcribes Cued Speech as “cute speech” among other things (which admittedly makes me giggle).

For me, just seeing that request made me think of possibilities for captioning Cued Speech videos. See, I’ve captioned sign language videos before, both my own and others’. Captioning is not extraordinarily difficult, but it can be very time-consuming. Essentially, you’ve got to break the transcript into caption lines and align each one with the correct timestamps, which entails a lot of right-clicking through frames and watching mouth movements to make sure each caption ends on the right word. It’s even trickier when you have to translate the content into a different language and a phrase in the original language doesn’t match up with the timing of the captioned language. This applies even when you’re the one who produced the content.
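To give a rough sense of what that alignment step boils down to, here’s a small illustrative Python sketch (not what Amara or YouTube actually run, and the cue text and timings below are made-up placeholders) that pairs caption lines with start and end times and writes them out in the common SRT format:

```python
# Illustrative only: pair each caption line with a start/end time
# and write the cues out as an .srt file. In real captioning work,
# the hard part is finding these times by scrubbing the video.

def to_srt_time(seconds):
    """Format seconds as the HH:MM:SS,mmm timestamp SRT expects."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# (caption text, start time in seconds, end time in seconds) -- placeholders
cues = [
    ("It's finally been released:", 0.0, 2.1),
    ("a TED talk on Cued Speech.", 2.1, 4.5),
]

with open("captions.srt", "w", encoding="utf-8") as f:
    for i, (text, start, end) in enumerate(cues, start=1):
        f.write(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n\n")
```

The file format itself is the easy part (and sites like Amara handle it for you); the time-consuming bit is pinning down those start and end times for every line.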

But with Cued Speech, I think seeing the handshapes along with the mouth would facilitate that process, especially when combined with speech recognition software that automatically syncs a pre-uploaded transcript with the correct timestamps. It would also enable other cuers to contribute captions to the video (as Aaron did) without any discrepancies in interpretation, because it’s straight-up transliteration. Not to mention, it would be excellent cue-reading practice for budding cuers.

It’s kind of exciting to think about how accessible Cued Speech videos can become through the captioning process. In that kind of work, every little thing that makes it easier helps.