Before the pandemic, my husband and I would often meet our friends for a beer at the weekend. For many, grabbing a drink with friends is the epitome of a relaxing evening. But for deaf people in a hearing crowd, a pub can be a perfect storm of bad lighting, loud background noise and full mouths that make communication difficult. Sometimes I enjoy my friends' company enough to put in the work of following the conversation. But sometimes I am not in the mood for this work. I stare at my beer and let my eyes glaze over. I am there, but not there.
Deaf people have a term for the isolation that grows out of being surrounded by non-signing hearing people: ‘Dinner Table Syndrome’.
The dinner table, a symbol of family life and bonding in popular hearing culture, often represents loneliness and inaccessibility to deaf people. In the UK and US, 90% of deaf children are born to hearing parents, and the majority of those families don’t learn a signed language to communicate with their child. Dinner Table Syndrome describes the phenomenon in which “deaf people are perpetually left out of conversations”, says Dr Leah Geer Zarchy, a deaf associate professor of American Sign Language (ASL) and deaf studies at California State University, Sacramento. “If something is funny and everyone erupts in laughter, the deaf person will lean in to the closest person and ask what was so funny. More often than not they'll be told, ‘Oh, it was nothing’ or ‘I'll tell you later’. Except that later never comes.”
Automatic captions can be turned on in recorded video events, but the accuracy of those captions varies from ‘close enough’ to nonsensical (Credit: Alamy)
Inaccessibility at our literal or metaphorical dinner tables can lead to language loss, or even language deprivation, for deaf children. People learn language and get information not only from direct teaching but also from indirect exposure, says Dr Jon Henner, a deaf assistant professor of professions in deafness at the University of North Carolina, Greensboro. But with multiple conversants and mouths full of food, the value of speech cues at a dinner table is limited. Family members also tend to move back and forth between topics quickly, making speechreading even more difficult, because there’s less contextual information.
The coping mechanism for deaf people at events like these is to disengage. “Often at events like these I would just go off and read,” says Henner. “A lot of us have stories like this.”
For the deaf community, interactions with non-signing friends, family members and colleagues have always contained barriers. But as our work and social lives have moved almost exclusively online due to the Covid-19 pandemic, these issues have been exacerbated. As society develops new virtual means of working and living, deaf people are often left out of the conversation, further widening the inequality gap.
In March, when quarantine first began, there was a moment in which some members of the disabled community wondered whether the pandemic might finally be the great equaliser. Neurodiverse people and those with chronic illness were finally being allowed to demonstrate that working outside an office could be just as productive. And, since remote working with children at home meant many adopted flexible or non-traditional working hours, people eschewed phone calls for emails, another relief for the deaf and hard-of-hearing.
The dinner table, a symbol of family life and bonding in popular hearing culture, often represents loneliness and inaccessibility to deaf people
For my own part, I was a video-conferencing novice, having used Zoom only a handful of times before, and thought technology could actually come to our aid in large-scale conversation settings – could it be that work meetings or trivia nights might now be closed-captioned? But video-conferencing platforms are actually just another dark bar, another dinner table. However, unlike the table at the pub, disengagement isn’t an option, as the majority of our lives are now played out on screens.
Dr Julie Hochgesang, deaf associate professor in the linguistics department at Gallaudet University in Washington DC, calls conversations that happen in these video-conferencing spaces “emboxed discourse”: they are shaped by the constraints of our screens. Hochgesang says emboxed discourse settings like Zoom can impede access, not only by perpetuating or worsening the standard “dinner table” barriers, but also by creating new problems specifically for signed conversation.
As Hochgesang points out, often these issues aren’t bugs or malfunctions, but actual design features: Zoom’s auto-focus feature doesn’t know whether to focus in on the hands or face, often blurring a signer or causing a flashing-light effect in their background. The jumping of Zoom windows to follow sound does nothing for conversations among signers, and for conversations in which there is a deaf minority using an interpreter, the deaf person must “pin” the interpreter’s screen, forgoing the rest of the meeting’s participants. Their gestures, facial expressions and funny pet photo-bombs are all lost. At an in-person meeting with an interpreter, one can still see the other participants, but Zoom meetings bring deaf people back to the childhood dinner table, once again deprived of the discourse setting’s ambient information.
Video calls can impede access, not only by perpetuating or worsening standard barriers, but also creating new problems specifically for signed conversation (Credit: Alamy)
To caption a live Zoom event, one must get a typist or hire a third-party captioning service. Automatic captions can be turned on in recorded Zoom events, but the accuracy of those captions varies from ‘close enough’ to nonsensical, with transcriptions botching proper nouns or anything beyond standard conversational English. As in Geer Zarchy’s dinnertime example, in which the deaf child is promised to be filled in “later”, these retrospective recordings or transcripts often never materialise.
In a world where Zoom happy hours, trivia nights, weddings and birthdays now abound, Dinner Table Syndrome has also cropped up in mixed deaf and hearing company where communication strategies had previously been negotiated. In person, my hearing friends know to tap me on the shoulder to make sure I can see them clearly, to repeat themselves and to write out what I don’t understand. Online, though, things move quickly: hearing people cut out and talk over one another, and the two-dimensional nature of an emboxed discourse setting makes speechreading even harder. It’s also unrealistic to put the financial burden of paid captions on private citizens in a strapped economy. But that doesn’t make the feelings of isolation any less painful.
Zoom’s auto-focus feature doesn’t know whether to focus in on the hands or face, often blurring a signer or causing a flashing-light effect in their background
Hochgesang says she worries that our time spent in emboxed discourse settings might influence accessibility in the future, “that all the hard-won changes we’ve made in our access and rights as deaf people may be eroded or even lost… [and] that visual and tactile cues will change so much in a way that benefit hearing people and further strand deaf and deaf-blind people”.
Change, slowly but surely
In the meantime, deaf people must continue to do the invisible labour of advocating for our right to understand and be understood in these virtual spaces.
Over the past six months, deaf people have worked hard for access. Deaf educational audiologist Tina Childress started the website Connect-Hear to aggregate communication strategies for masked in-person and online interactions with deaf people, including technological workarounds to add captioning or circumvent platforms’ “features” detrimental to signed discourse.
The platforms themselves have also made adjustments. While Zoom has yet to incorporate in-house or free captioning, Microsoft Teams and Google Meet have integrated automatic captioning as a part of their platforms. Automatic captions, produced by artificial intelligence programmes, are far from perfect and can lull hearing hosts into a false sense of inclusion, but they are better than nothing. Zoom has made several changes to its paid education platform that allow teachers to pin deaf students (who won’t trigger the speaker view) and deaf students to pin both teacher and interpreter, though these are not available to the general public.
For conversations in which there is a deaf minority using an interpreter, the deaf person must “pin” the interpreter’s screen, forgoing the rest of the participants (Credit: Alamy)
As we continue the fight for accessibility, one of the best ways a hearing person can be an ally is to take on some of this advocacy work. Asking, “Will this be captioned? Interpreted?” is a valuable reminder that inclusion isn’t just for deaf people – it’s for everybody. Volunteering to be a typist at a Zoom event, or donating or raising funds so an organiser can pay for third-party captions or an interpreter are also concrete ways to offer support.
Emboxed discourse settings don’t have to mean inequity for deaf and hard-of-hearing users. As Hochgesang points out, “deaf people excel at multimodal communication” because we do it daily as we navigate a hearing majority world, and video-conferencing and social networking tools are optimised for that. As it becomes increasingly standard for conversations to flow between verbal and written modalities, and even video, GIF and image-sharing, deaf people have the opportunity to thrive. That is, if hearing people will give us a seat at the table.