e-HAIL Event

AI-Assisted Augmented Reality Captioning Approaches to Assist Patients with Hearing Loss

Dhruv Jain, Ph.D.
Assistant Professor of Electrical Engineering and Computer Science and Assistant Professor of Information
U-M College of Engineering and U-M School of Information

Michael McKee, M.D., M.P.H.
Associate Professor of Family Medicine and Associate Professor of Physical Medicine and Rehabilitation
U-M Medical School
WHERE:
Remote/Virtual

People with hearing loss face challenges communicating with healthcare providers, which impacts their ability to receive timely, quality healthcare. While services such as sign-language interpreters and CART (Communication Access Realtime Translation) providers can aid communication, they are expensive, not always available, and may compromise patient privacy (due to the presence of another person). Our team (Dr. Michael McKee and I) is investigating a fundamentally different approach: using emerging augmented reality (AR) and automatic speech recognition (ASR) technologies to display captions directly in the user's line of sight. Compared to a non-AR approach (e.g., ASR running on a laptop), AR allows captions to be placed in 3D space near the speaker, easing communication by letting the user face the speaker directly and access facial and gestural cues.

We will study three AR-based approaches and compare performance:

(1) a head-mounted display-based approach developed in my lab,

(2) a smartphone-based approach (e.g., using Apple ARKit) that the user can hold to see captions in front of the speaker (a rough sketch of this approach follows this list), and

(3) a custom device with built-in speech recognition, designed for two people facing each other, that we will procure from Google.
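To make the smartphone-based idea concrete, the sketch below is a minimal, hypothetical example combining Apple's ARKit and Speech frameworks: live ASR output is rendered as a text node placed at a fixed offset about one meter in front of the camera. It is not code from either lab; the class and method names (CaptionViewController, startCaptioning, showCaption) and the fixed-offset placement (rather than detecting and anchoring to the actual speaker) are our own illustrative assumptions, and a real app would also need camera, microphone, and speech-recognition usage descriptions in Info.plist.

    import UIKit
    import ARKit
    import SceneKit
    import Speech
    import AVFoundation
    import simd

    // Illustrative sketch: live speech-recognition captions rendered as AR text
    // roughly one meter in front of the camera.
    final class CaptionViewController: UIViewController {
        private let sceneView = ARSCNView()
        private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
        private let audioEngine = AVAudioEngine()
        private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
        private var recognitionTask: SFSpeechRecognitionTask?
        private var captionNode: SCNNode?

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.frame = view.bounds
            view.addSubview(sceneView)
            sceneView.session.run(ARWorldTrackingConfiguration())

            SFSpeechRecognizer.requestAuthorization { status in
                guard status == .authorized else { return }
                DispatchQueue.main.async { self.startCaptioning() }
            }
        }

        private func startCaptioning() {
            let request = SFSpeechAudioBufferRecognitionRequest()
            request.shouldReportPartialResults = true
            recognitionRequest = request

            // Stream microphone audio into the speech recognizer.
            let inputNode = audioEngine.inputNode
            let format = inputNode.outputFormat(forBus: 0)
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
                request.append(buffer)
            }
            audioEngine.prepare()
            try? audioEngine.start()

            // Refresh the caption whenever a partial or final transcription arrives.
            recognitionTask = recognizer?.recognitionTask(with: request) { [weak self] result, _ in
                guard let text = result?.bestTranscription.formattedString else { return }
                DispatchQueue.main.async { self?.showCaption(text) }
            }
        }

        // Replace the caption node at a fixed offset in front of the current camera pose.
        // A speaker-aware version would anchor the text near a detected face or body instead.
        private func showCaption(_ text: String) {
            captionNode?.removeFromParentNode()

            let geometry = SCNText(string: text, extrusionDepth: 0.5)
            geometry.font = UIFont.systemFont(ofSize: 10)
            geometry.firstMaterial?.diffuse.contents = UIColor.white

            let node = SCNNode(geometry: geometry)
            node.scale = SCNVector3(0.002, 0.002, 0.002)

            if let frame = sceneView.session.currentFrame {
                var offset = matrix_identity_float4x4
                offset.columns.3.z = -1.0   // one meter ahead of the camera
                node.simdTransform = frame.camera.transform * offset
            }
            sceneView.scene.rootNode.addChildNode(node)
            captionNode = node
        }
    }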

Zoom information will be shared with e-HAIL members.

Organizer

J. Henrike Florusbosch