Areas of Research

Frequency Importance in Speech Recognition

The importance of speech cues at different frequencies varies across the spectrum. In individuals with normal audiometric thresholds, frequencies around 1–2 kHz tend to have the highest importance for speech recognition, but this trend does not hold for individuals with hearing loss. As part of my post-doctoral training with Dr. Monita Chatterjee, I developed a method of estimating the importance of each channel for speech recognition on an individual basis. These estimates are important because they allow us to characterize the factors that make a frequency region contribute more or less to speech recognition in different patient populations. My current research focuses on identifying novel methods of estimating access to important speech cues using speech tests that are already collected in the clinic, which will give clinicians more information to precisely tailor hearing assistive technology to the needs of each patient. This line of work benefits from collaboration with many other scientists, including Dr. Sarah Yoho at Utah State University, Drs. Ryan McCreery and Kaylah Lalonde at Boys Town, Dr. Doug Sladen at the University of British Columbia, and Dr. Emily Buss at the University of North Carolina.

Factors Underlying Speech Recognition Outcomes For Patients with Hearing Loss

Several factors contribute to individual differences in speech recognition outcomes in patients with hearing loss, including auditory resolution and cognition. The goal of this line of research is to find a set of quick, easy-to-understand tasks that can assess individual differences in the factors that limit speech recognition outcomes. To achieve this goal, we test performance on a variety of psychophysical and cognitive tasks to identify the set of factors that predicts individual differences in speech recognition. Once we know which factors to assess, we will be able to design diagnostic assessments that identify weaknesses in the factors governing speech recognition outcomes, which would guide effective clinical intervention. This work benefits from collaborations with Drs. Angela AuBuchon at Boys Town, Monita Chatterjee at Northwestern University, and Varsha Rallapalli at the University of South Florida. Examples of the tasks we use can be found here.

Working Memory in Difficult Listening Conditions

Individual differences in working memory predict speech recognition accuracy in some listening conditions, but not others. The goal of this line of research is to identify which aspects of difficult listening conditions determine whether individuals need to use working memory to facilitate speech recognition. For example, patients who lose their hearing and receive cochlear implants later in life must re-learn the mapping between auditory input and speech. Learning these mappings requires forming and retrieving links between auditory stimuli and their corresponding speech interpretations, which means that individuals with better working memory should learn these mappings more quickly. To test different aspects of difficult listening conditions, we experimentally manipulate auditory stimuli to determine which manipulations affect working memory. This work benefits from collaboration with Dr. Angela AuBuchon at Boys Town.

Moving Research Online

An ongoing challenge of research with patient populations, including patients with cochlear implants, is limited access to patients who are able to travel to the lab to participate in research. As part of the Acoustical Society of America Remote Testing Task Force, my lab has spent the past year developing methods for conducting research online, with participants tested at home. We have developed a series of online research tools that are publicly available for use by other researchers.

Audio-Visual Spatial Integration

My doctoral dissertation focused on how visual spatial cues can alter auditory spatial perception, both in concurrently presented audio-visual stimuli (the ventriloquism effect) and in subsequent auditory stimuli (the ventriloquism aftereffect). This work was conducted at the University of Rochester under the direction of Drs. Gary Paige and Bill O'Neill. I am no longer actively engaged in this line of research, although I find new developments in this area fascinating.

©2021 by Adam Bosen.