May 22, 2025

Building better captions, together

Written by: Gopikrishnan Sasikumar

This month, we conducted a co-design workshop to test and refine our live captioning prototype. Participants included individuals who are deaf, hard-of-hearing, or experience speech-related barriers. Work like this helps ground our tools in lived experience and improves usability before launch.

Live captioning tools are everywhere. They’re built into phones, conferencing apps, and meeting platforms. But most of them were never designed with the full range of human communication in mind. They expect clear audio, fluent English, and a single speaker at a time.

They assume standardized speech, standard accents, and standard ways of interacting with technology. When people don’t match those expectations, for example when they sign, switch languages mid-sentence, or speak with irregular rhythms or regional inflections, the system fails.

That failure affects how people participate, how they respond, how they’re perceived, and whether they’re left out. We wanted to understand this failure more deeply. And we wanted to build something better.

Listening to the people who live the problem

Sunva AI is our prototype for an inclusive, adaptive live captioning system. To test and refine it, we hosted a co-design workshop with seven participants from the deaf and hard-of-hearing community and individuals with speech-related barriers.

Participants ranged in age from 20 to 35, and brought a mix of lived experiences. Some relied on Indian Sign Language. Others used hearing aids or lip reading. Some preferred text, others oral communication. Everyone had encountered barriers with existing captioning tools and came with sharp perspectives on what those tools got wrong, and what a better system could make possible.

Over the course of three hours, we moved through a set of design activities that surfaced real pain points and prioritized features based on everyday use.

Designing from the inside out

The first activity focused on the basics: what currently breaks.

Participants mapped out their experiences: captions moving too fast to follow, systems that garbled regional accents, and interfaces that couldn’t distinguish between speakers. They shared how certain words, even when technically accurate, felt emotionally off, flattening tone, intent, or nuance.

Then we explored what would make a captioning system feel genuinely supportive. Using the MoSCoW method (must have, should have, could have, won’t have), participants ranked a set of potential features, including multilingual support, caption speed control, multi-speaker identification, and text simplification.

The priorities were clear. Users needed more than just accurate transcription. They needed systems that adapt to different communication contexts: formal and casual, solo and group, slow and fast, monolingual and mixed.

Finally, we introduced the Sunva AI prototype and invited participants to engage in a “Love–Break–Fix” session. This part was both validating and eye-opening. Participants valued the real-time text correction and text-to-speech (TTS) features already built into the system.

But they also flagged where it could go further: from giving users more control over how their inputs are handled to refining the tone and delivery of TTS outputs based on context.

Why this matters

The co-design workshop gave us a clear roadmap grounded in everyday communication realities. Each activity revealed what inclusive captioning systems must be designed to handle: multilingual speech, overlapping speakers, varying reading speeds, and diverse communication styles.

Sunva AI is built to adapt. It brings together live captioning, simplified text, and context-aware speech tools in one system. It supports users who switch between modes (typing, speaking, signing) and who need captioning tools to respond to their pace, language, and intent.

This work is ongoing, and it’s deeply collaborative. Every feature we build reflects input from people navigating communication barriers in real time. This is how we’re shaping Sunva AI: through shared authorship, practical design, and constant learning.

What’s next

We’re now improving the Sunva AI MVP based on this feedback. We’re preparing pilots in workplace settings, assessing feasibility for new features, and building benchmarks for performance, privacy, and fairness.

We’re also looking to collaborate.

If you’re part of the deaf or hard-of-hearing community, an AI researcher, a builder in the assistive tech space, or part of an NGO working on accessibility, we’d love to hear from you.

Sunva AI needs co-creators, testers, critics, and partners. If you’re interested in collaborating, adopting the tool, or giving feedback, reply to this email or reach out here: gks@peopleplus.ai

Learn more about Sunva AI here: https://peopleplus.ai/sunva

Join the Community

People+ai is an EkStep Foundation initiative. Our work is designed around the belief that technology, especially AI, will cause paradigm shifts that can help India & its people reach their potential.

An EkStep Foundation Initiative
