Monday, 21 October 2019

Assessing the Effect of Middle Ear Effusions on Wideband Acoustic Immittance Using Optical Coherence Tomography
Objectives: Wideband acoustic immittance (WAI) noninvasively assesses middle ear function by measuring the sound conduction over a range of audible frequencies. Although several studies have shown the potential of WAI for detecting the presence of middle ear effusions (MEEs), determining the effects of MEE type and amount on WAI in vivo has been challenging due to the anatomical location of the middle ear cavity. The purpose of this study is to correlate WAI measurements with physical characteristics of the middle ear and MEEs determined by optical coherence tomography (OCT), a noninvasive optical imaging technique. Design: Sixteen pediatric subjects (average age of 7 ± 4 years) were recruited from the primary care clinic at Carle Foundation Hospital (Urbana, IL). A total of 22 ears (normal: 15 ears, otitis media with effusion: 6 ears, and acute otitis media: 1 ear, based on physician’s diagnosis) were examined via standard otoscopy, tympanometry, OCT imaging, and WAI measurements in a busy, community-based clinical setting. Cross-sectional OCT images were analyzed to quantitatively assess the presence, type (relative turbidity based on the amount of scattering), and amount (relative fluid level) of MEEs. These OCT metrics were utilized to categorize subject ears into no MEE (control), biofilm without a MEE, serous-scant, serous-severe, mucoid-scant, and mucoid-severe MEE groups. The absorbance levels in each group were statistically evaluated at α = 0.05. Results: The absorbance of the control group showed a similar trend when compared with a pediatric normative dataset, and the presence of an MEE generally decreased the power absorbance. The mucoid MEE group showed significantly less power absorbance from 2.74 to 4.73 kHz (p < 0.05) when compared with the serous MEE group, possibly due to the greater mass impeding the middle ear system. Similarly, greater amounts of middle ear fluid contributed to lower power absorbance from 1.92 to 2.37 kHz (p < 0.05) when compared with smaller amounts of fluid. As expected, the MEEs with scant fluid only significantly affected the power absorbance at frequencies greater than 4.85 kHz. A large variance in the power absorbance was observed between 2 and 5 kHz, suggesting dependence on both the type and amount of MEE. Conclusions: Physical characteristics of the middle ear and MEEs quantified from noninvasive OCT images can be helpful for understanding abnormal WAI measurements. Mucoid MEEs decrease the power absorbance more than serous MEEs, and greater amounts of MEE decrease the power absorbance, especially at higher (>2 kHz) frequencies. As both the type and amount of MEE can significantly affect WAI measurements, further investigations to correlate acoustic measurements with physical characteristics of middle ear conditions in vivo are needed. ACKNOWLEDGMENTS: The authors thank Paula Bradley, Alexandra Almasov, and Deveine Toney from the Carle Research Office at Carle Foundation Hospital, Urbana, IL, for their help with IRB protocol management and subject consenting and assenting. The authors acknowledge Dr. Ada C. K. Sum, Dr. Neena Tripathy, and Dr. Stephanie A. Schroeder from the Department of Pediatrics at Carle Foundation Hospital for their help in subject recruitment. The authors also thank the nursing staff in the Department of Pediatrics at Carle Foundation Hospital for their clinical assistance. Finally, the authors acknowledge Dr. Navid Shahnaz for providing pediatric normative wideband reflectance datasets published in Beers et al.
(2010). This research was funded in part by a Bioengineering Research Partnership grant from the National Institute of Biomedical Imaging and Bioengineering at the National Institutes of Health (R01 EB013723, S.A.B.). J.W. designed experiments, collected and analyzed data, and drafted the paper; G.L.M. designed experiments, collected data, and edited the paper; P.-C.H. collected data and edited the paper; M.C.H., M.A.N., and R.G.P. collected data and reviewed and edited the paper; E.J.C. and R.B. generated and managed the IRB protocol and edited the paper; S.A.B. designed experiments, analyzed data, reviewed and edited the paper, and obtained funding for the study. S.A.B. is a co-founder and Chief Medical Officer of PhotoniCare, Inc. M.A.N. has equity interest in and serves on the clinical advisory board of PhotoniCare, Inc. Received September 18, 2018; accepted July 29, 2019. Address for correspondence: Stephen A. Boppart, Beckman Institute for Advanced Science and Technology, 405 N. Mathews Ave., Urbana, IL 61801, USA. E-mail: boppart@illinois.edu This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
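The per-frequency group comparison described in this abstract (absorbance evaluated at α = 0.05, with significance emerging over specific frequency bands such as 2.74 to 4.73 kHz) can be illustrated with a short sketch. This is a toy reconstruction, not the study's analysis code: the group sizes, absorbance values, and frequency grid below are invented, and the authors may have used a different test or a multiple-comparison correction.

```python
# Toy sketch of a per-frequency comparison of WAI power absorbance
# between two MEE groups at alpha = 0.05. All data are synthetic;
# the study's actual test and any correction may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
freqs = np.geomspace(0.25, 8.0, 60)              # test frequencies, kHz
serous = rng.normal(0.50, 0.08, size=(6, 60))    # absorbance, serous ears
mucoid = rng.normal(0.38, 0.08, size=(7, 60))    # absorbance, mucoid ears

# Two-sample t-test at every frequency; collect the significant band.
p_values = stats.ttest_ind(serous, mucoid, axis=0).pvalue
band = freqs[p_values < 0.05]
if band.size:
    print(f"Significant differences from {band.min():.2f} "
          f"to {band.max():.2f} kHz")
```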
Minimal and Mild Hearing Loss in Children: Association with Auditory Perception, Cognition, and Communication Problems
Objectives: “Minimal” and “mild” hearing loss are the most common but least understood forms of hearing loss in children. Children with better ear hearing level as low as 30 dB HL have a global language impairment and, according to the World Health Organization, a “disabling level of hearing loss.” In a population of 6- to 11-year-olds, we examined how hearing level ≤40.0 dB HL (pure-tone average, PTA, of 1 and 4 kHz thresholds) is related to auditory perception, cognition, and communication. Design: School children (n = 1638) were recruited in 4 centers across the United Kingdom. They completed a battery of hearing (audiometry, filter width, temporal envelope, speech-in-noise) and cognitive (IQ, attention, verbal memory, receptive language, reading) tests. Caregivers assessed their children’s communication and listening skills. Children included in this study (702 male; 752 female) had 4 reliable tone thresholds (1, 4 kHz each ear), and no caregiver reported medical or intellectual disorder. Normal-hearing children (n = 1124, 77.1%) had all 4 thresholds and PTA <15 dB HL. Children with ≥15 dB HL for at least 1 threshold and PTA <20 dB HL (n = 245, 16.8%) had minimal hearing loss. Children with 20 ≤ PTA <40 dB HL (n = 88, 6.0%) had mild hearing loss. Interaural asymmetric hearing loss (|left PTA − right PTA| ≥10 dB) was found in 28.9% of those with minimal and 39.8% of those with mild hearing loss. Results: Speech perception in noise, indexed by vowel–consonant–vowel pseudoword repetition in speech-modulated noise, was impaired in children with minimal and mild hearing loss, relative to normal-hearing children. Effect size was largest (d = 0.63) in asymmetric mild hearing loss and smallest (d = 0.21) in symmetric minimal hearing loss. Spectral (filter width) and temporal (backward masking) perception was impaired in children with both forms of hearing loss, but suprathreshold perception generally related only weakly to PTA. Speech-in-noise (nonsense syllables) and language (pseudoword repetition) were also impaired in both forms of hearing loss and correlated more strongly with PTA. Children with mild hearing loss were additionally impaired in working memory (digit span) and reading, and generally performed more poorly than those with minimal loss. Asymmetric hearing loss produced as much impairment overall on both auditory and cognitive tasks as symmetric hearing loss. Nonverbal IQ, attention, and caregiver-rated listening and communication were not significantly impaired in children with hearing loss. Modeling suggested that 15 dB HL is objectively an appropriate lower audibility limit for diagnosis of hearing loss. Conclusions: Hearing loss between 15 and 30 dB HL PTA is, at ~20%, much more prevalent in 6- to 11-year-old children than most current estimates suggest. Key aspects of auditory and cognitive skills are impaired in both symmetric and asymmetric minimal and mild hearing loss. Hearing loss <30 dB HL is most closely related to speech perception in noise, and to cognitive abilities underpinning language and reading. The results suggest wider use of speech-in-noise measures to diagnose and assess management of hearing loss, and a reduction of the clinical hearing loss threshold for children to 15 dB HL.
ACKNOWLEDGMENTS: This research was generously supported by the intramural programme of the MRC, the Nottingham University Hospitals National Health Service Trust, The Oticon Foundation and, during analysis and manuscript preparation, NIH Grant R01DC014078, Cincinnati Children’s Hospital Research Foundation, and the NIHR Manchester Biomedical Research Centre. Our gratitude is extended to Alison Riley, senior audiologist, and Mark Edmondson-Jones, statistician, for their help during data gathering and early analysis of the data. Sonia Ratib, research administrative manager, and five research assistants (Karen Baker, Nicola Bergin, Ruth Lewis, Leanne Mattu, Anna Phillips) collected data from the regional test centers over about a 1-year period. The senior personnel in those centers (Veronica Kennedy, Juan Mora, and Kelvin Wakeham) generously provided their facilities and help with the study. IHR technical and support staff provided substantial assistance with the project. We particularly acknowledge the contributions of Tim Folkard, Victor Chilekwa, Dave Bullock, and John Chambers. Mark Lutman (University of Southampton) provided software and advice for the automated audiological screen. Lisa Hunter and Dan Sanes engaged D.R.M. in long discussions about the results of this study and strongly encouraged us to publish them. David Moore is supported by the NIHR Manchester Biomedical Research Centre. Finally, we would like to thank all the children, their caregivers, and the schools who participated in this study. D.R.M. is the principal investigator leading the planning, funding, design, and write-up of the study. O.Z. participated in the statistical analysis, interpretation, and presentation of this study. M.A.F. participated as executive investigator, coordinating all aspects of planning and running the study, and being integrally involved in the design, analysis, and write-up. The authors have no conflicts of interest to disclose. Received December 18, 2018; accepted August 6, 2019. Address for correspondence: David R. Moore, Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, ML15008, Room S1.603, 3333 Burnet Ave, Cincinnati, OH 45229, USA. E-mail: david.moore2@cchmc.org Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
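The classification criteria in the abstract above (1 and 4 kHz thresholds per ear, PTA cutoffs at 15, 20, and 40 dB HL, and a 10 dB interaural asymmetry rule) amount to a simple decision rule. Here is a minimal sketch of that rule; the function name is hypothetical, and the use of the better-ear PTA is an assumption based on the abstract's wording.

```python
# Minimal sketch of the hearing-status criteria stated in the abstract.
# Inputs are 1 and 4 kHz thresholds (dB HL) per ear; the better-ear
# PTA is an assumption, and the function name is hypothetical.

def classify_hearing(left_1k, left_4k, right_1k, right_4k):
    left_pta = (left_1k + left_4k) / 2
    right_pta = (right_1k + right_4k) / 2
    pta = min(left_pta, right_pta)                 # better-ear PTA
    asymmetric = abs(left_pta - right_pta) >= 10   # interaural asymmetry
    thresholds = (left_1k, left_4k, right_1k, right_4k)
    if pta < 15 and all(t < 15 for t in thresholds):
        category = "normal"
    elif pta < 20 and any(t >= 15 for t in thresholds):
        category = "minimal"
    elif 20 <= pta < 40:
        category = "mild"
    else:
        category = "outside study range"
    return category, asymmetric

# Example: one elevated 4 kHz threshold in the left ear.
print(classify_hearing(10, 25, 5, 10))   # ('minimal', True)
```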
Recording Obligatory Cortical Auditory Evoked Potentials in Infants: Quantitative Information on Feasibility and Parent Acceptability
Objectives: With the advent of newborn hearing screening and early intervention, there is a growing interest in using supra-threshold obligatory cortical auditory evoked potentials (CAEPs) to complement established pediatric clinical test procedures. The aim of this study was to assess the feasibility and parent acceptability of recording infant CAEPs. Design: Typically developing infants (n = 104) who had passed newborn hearing screening and whose parents expressed no hearing concerns were recruited. Testing was not possible in 6 infants, leaving 98, age range 5 to 39 weeks (mean age = 21.9 weeks, SD = 9.4). Three short-duration speech-like stimuli (/m/, /g/, /t/) were presented at 65 dB SPL via a loudspeaker at 0° azimuth. Three criteria were used to assess clinical feasibility: (i) median test duration <30 min, (ii) >90% completion rate in a single test session, and (iii) >90% response detection for each stimulus. We also recorded response amplitude, latency, and CAEP signal-to-noise ratio. Response amplitudes and residual noise levels were compared for Fpz (n = 56) and Cz (n = 42) noninverting electrode locations. Parental acceptability was based on an 8-item questionnaire (7-point scale, 1 being best). In addition, we explored the patient experience in semistructured telephone interviews with seven families. Results: The median time taken to complete 2 runs for 3 stimuli, including preparation, was 27 min (range 17 to 59 min). Of the 104 infants, 98 (94%) were in an appropriate behavioral state for testing. A further 7 became restless during testing and their results were classified as “inconclusive.” In the remaining 91 infants, CAEPs were detected, for at least one stimulus, in every case with normal bilateral tympanograms. Detection of CAEPs in response to /m/, /g/, and /t/ in these individuals was 86%, 100%, and 92%, respectively. Residual noise levels and CAEP amplitudes were higher for Cz electrode recordings. Mean scores on the acceptability questionnaire ranged from 1.1 to 2.6. Analysis of interviews indicated that parents found CAEP testing to be a positive experience and recognized the benefit of having an assessment procedure that uses conversational level speech stimuli. Conclusions: Test duration, completion rates, and response detection rates met (or were close to) our feasibility targets, and parent acceptability was high. CAEPs have the potential to supplement existing practice in 3- to 9-month-olds. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: Funded by a strategic investment grant from Manchester University Hospitals NHS Foundation Trust. The study was supported by the NIHR Manchester Biomedical Research Centre and the NIHR Manchester Clinical Research Facility. Ruth Nassar and Anne-Marie Dickinson, pediatric research audiologists, recruited the participants and collected the data. Laura McNerlin conducted parent interviews. Michael Maslin contributed to initial data analysis. K.J.M., S.C.P., K.U., R.B., I.A.B. designed the experiment; A.V. contributed to data collection; K.J.M., S.C.P., A.V., M.A.S., B.V.D., and A.M. analyzed the data; K.J.M., S.C.P., A.V., B.V.D. wrote the article; all authors discussed the results and contributed to revision of the manuscript. B.V.D. is employed at National Acoustic Laboratories (NAL).
The HEARLab is a product developed by NAL, part of a government statutory authority, and the HEARing Cooperative Research Centre (HEARing CRC). At the time of testing, the technology was licensed to Frye Electronics. Received November 12, 2018; accepted July 15, 2019. Address for correspondence: Kevin J. Munro, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, M13 9PL, United Kingdom. E-mail: kevin.munro@manchester.ac.uk This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Maximum Output and Low-Frequency Limitations of B71 and B81 Clinical Bone Vibrators: Implications for Vestibular Evoked Potentials
Objectives: Bone-conducted vestibular evoked myogenic potentials (VEMPs) are tuned to have their maximum amplitude in response to tone bursts at or below 250 Hz. The low-frequency limitations of clinical bone vibrators have not been established for transient, tone burst stimuli at frequencies that are optimal for eliciting VEMPs. Design: Tone bursts with frequencies of 250 to 2000 Hz were delivered to B71 and B81 bone vibrators, and their output was examined using an artificial mastoid. The lower-frequency limit of the transducers was evaluated by examining the spectral output of the bone vibrators. Maximum output levels were evaluated by measuring input–output functions across a range of stimulus levels. Results: Both the B71 and B81 could produce transient tone bursts with frequencies as low as 400 Hz. However, tone bursts with frequencies of 250 and 315 Hz resulted in output with peak spectral energy at approximately 400 Hz. From 500 to 2000 Hz, maximum output levels within the linear range were between 120 and 128 dB peak force level. The newer B81 bone vibrator had a maximum output approximately 5 dB higher than the B71 at several frequencies. Conclusions: These findings demonstrate that both transducers can reach levels appropriate to elicit bone-conducted VEMPs, but the low-frequency limitations of these clinical bone vibrators limit tone burst frequency to approximately 400 Hz when attempting to stimulate the otolith organs. ACKNOWLEDGMENTS: Portions of the data reported here partially fulfilled requirements for an AuD class (A.A., V.B., M.C., E.S., A.T., and V.W.). Funding for this research was provided, in part, by research grants from the James Madison University College of Health and Behavioral Studies (C.G.C. and E.G.P.) and from the James Madison University Provost’s Office (C.G.C. and E.G.P.). All authors designed and performed the research. All authors analyzed the data. C.G.C. and E.G.P. wrote the paper. The authors have no conflicts of interest to disclose. Received May 15, 2019; accepted August 8, 2019. Address for correspondence: Christopher G. Clinard, Department of Communication Sciences and Disorders, James Madison University, 235 MLK Jr. Way, MSC 4304, HBS 1024, Harrisonburg, VA 22807, USA. E-mail: clinarcg@jmu.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
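The spectral check at the heart of this study, finding where a short tone burst's energy actually peaks, is easy to sketch. The code below is illustrative only: it analyzes a synthetic burst, whereas the study analyzed artificial-mastoid recordings of the transducers' output, in which the 250 and 315 Hz bursts peaked near 400 Hz.

```python
# Illustrative sketch: locate the peak spectral energy of a tone burst.
# Synthetic input; the study applied this kind of analysis to
# artificial-mastoid recordings of B71/B81 output.
import numpy as np

def peak_spectral_frequency(burst, fs):
    """Return the frequency (Hz) of maximum spectral magnitude."""
    windowed = burst * np.hanning(len(burst))
    magnitude = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(burst), d=1.0 / fs)
    return freqs[np.argmax(magnitude)]

fs = 48000                                   # sample rate, Hz
f0 = 250                                     # nominal burst frequency, Hz
t = np.arange(int(2 * fs / f0)) / fs         # 2-cycle burst
burst = np.sin(2 * np.pi * f0 * t)
print(peak_spectral_frequency(burst, fs))    # ~250 Hz for an ideal burst
```

For a high-pass-limited transducer, the same analysis applied to the recorded output would return a peak nearer 400 Hz even for a 250 Hz input, which is the limitation the study reports.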
Auditory Training Supports Auditory Rehabilitation: A State-of-the-Art Review
Objectives: Auditory training (AT), which is active listening to various auditory stimuli, aims to improve auditory skills. There is evidence that AT can be used as a tool in auditory rehabilitation to improve speech perception and other auditory cognitive skills in individuals with hearing impairment. The present state-of-the-art review examines the effect of AT on communication abilities in individuals with hearing impairment. In particular, transfer of AT effects to performance in untrained speech perception tasks was evaluated. Design: PubMed, Medline, and Web of Science databases were searched using combinations of key words, with restriction to the publication date range from December 2012 until December 2018. The participant, intervention, control, outcome, and study design criteria were used for the inclusion of articles. Only studies comparing effects in an intervention group to a control group were considered. The target group included individuals with a mild to moderately severe hearing impairment, with and without hearing-aid experience. Out of 265 article abstracts reviewed, 16 met the predefined criteria and were included in the review. Results: The majority of studies included in this state-of-the-art review report at least one outcome measure that shows an improvement in non-trained tasks after a period of intense AT. However, comparison between studies remains difficult because training benefits were assessed with various outcome measures. In addition, the sustainability of training benefits has not been investigated sufficiently. Conclusions: Recent evidence suggests that intensive auditory(-cognitive) training protocols are a valid tool to improve auditory communication skills. Individuals with hearing impairment seem to benefit the most from a combination of sensory rehabilitation with hearing aids and AT to enhance auditory rehabilitation. Long-term benefits of AT are still not consistently observed and should be a focus of future research. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank Jena Anne Schnittker for proofreading the manuscript. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. M.S., J.B., and S.L. were responsible for conception and design of the project. M.S. conducted the literature research, article screening, and data extraction, and wrote the manuscript. J.B. supported the article screening, provided expert review on the analysis, and added valuable content to the manuscript. S.L. added important intellectual content to the project and critically reviewed the manuscript. The authors have no conflicts of interest to disclose. Received December 14, 2018; accepted August 16, 2019. Address for correspondence: Maren Stropahl, Department of Science and Technology, Sonova AG, Laubisruetistrasse 28, 8712 Staefa, Switzerland. E-mail: maren.stropahl@sonova.com This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Interaction Between Electric and Acoustic Stimulation Influences Speech Perception in Ipsilateral EAS Users
Objectives: The aim of this study was to determine electric-acoustic masking in cochlear implant users with ipsilateral residual hearing and different electrode insertion depths, and to investigate the influence on speech reception. The effects of different fitting strategies—meet, overlap, and a newly developed masking adjusted fitting (UNMASKfit)—on speech reception are compared. If electric-acoustic masking has a detrimental effect on speech reception, the individualized UNMASKfit map might be able to reduce masking and thereby enhance speech reception. Design: Fifteen experienced MED-EL Flex electrode recipients with ipsilateral residual hearing participated in a crossover design study using three fitting strategies for 4 weeks each. The following strategies were compared: (1) a meet fitting, dividing the frequency range between electric and acoustic stimulation, (2) an overlap fitting, delivering part of the frequency range both acoustically and electrically, and (3) the UNMASKfit, reducing the electric stimulation according to the individual electric-on-acoustic masking strength. A psychoacoustic masking procedure was used to measure the changes in acoustic thresholds due to the presence of electric maskers. Speech reception was measured in noise with the Oldenburg Matrix Sentence test. Results: Behavioral thresholds of acoustic probe tones were significantly elevated in the presence of electric maskers. Masking was greatest when the difference in location between the electric and acoustic stimulation was around one octave in place frequency. Speech reception scores and strength of masking showed a dependency on residual hearing, and speech reception was significantly reduced in the overlap fitting strategy. Electric-acoustic stimulation significantly improved speech reception over electric stimulation alone, with a tendency toward a larger benefit with the UNMASKfit map. In addition, masking was significantly inversely correlated to the speech reception performance difference between the overlap and the meet fitting. Conclusions: (1) This study confirmed the interaction between ipsilateral electric and acoustic stimulation in a psychoacoustic masking experiment. (2) The overlap fitting yielded poorer speech reception performance in stationary noise, especially in subjects with strong masking. (3) The newly developed UNMASKfit strategy yielded similar speech reception thresholds with an enhanced acoustic benefit, while at the same time reducing the electric stimulation. This could be beneficial in the long term if applied as a standard fitting, as hair cells are exposed to less potentially adverse electric stimulation. In this study, the UNMASKfit allowed participants to make better use of their natural hearing even after 1 month of adaptation. It might be feasible to transfer these results to the clinic by fitting patients with the UNMASKfit at their first fitting appointment, so that longer adaptation times can further improve speech reception. ACKNOWLEDGMENTS: The authors thank the subjects who dedicated their time and effort to this study. W.N. and M.I. received funding for this research from the DFG (German Research Foundation) Cluster of Excellence EXC 1077/1 ‘Hearing4all’, DFG project number 396932747, and MED-EL Medical Electronics. M.I. designed and performed experiments, co-designed the fitting rule, analyzed data, and wrote the article. W.N.
designed the experiments, co-designed the fitting rule, provided analysis, and contributed to the writing of the article. B.K. designed the experiments, provided analysis, and critically revised the manuscript. T.L. and A.B. provided critical revision of the experimental design and manuscript. The remaining authors have no conflicts of interest to disclose. Received March 12, 2019; accepted August 30, 2019. Address for correspondence: Marina Imsiecke, Deutsches HörZentrum Hannover, Department of Otorhinolaryngology, Karl-Wiechert-Allee 3, 30625 Hannover, Germany. E-mail: imsiecke.marina@mh-hannover.de This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
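The phrase "one octave in place frequency" in the abstract above refers to distance along the cochlea expressed through a place-frequency map. A common choice for such a map (not necessarily the one these authors used) is the Greenwood function; the sketch below uses its standard human parameters to convert two cochlear positions into an octave separation.

```python
# Sketch of the Greenwood place-frequency map for humans, often used
# to express electric-to-acoustic place separation in octaves; the
# authors' exact map and positions may differ.
import math

def greenwood_hz(x):
    """Characteristic frequency at relative basilar-membrane position
    x (0 = apex, 1 = base), Greenwood (1990) human parameters."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Hypothetical electric and acoustic places along the cochlea:
f_electric = greenwood_hz(0.45)
f_acoustic = greenwood_hz(0.30)
octaves = math.log2(f_electric / f_acoustic)
print(f"{f_electric:.0f} Hz vs {f_acoustic:.0f} Hz = {octaves:.2f} octaves")
```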
Detection and Attention for Auditory, Visual, and Audiovisual Speech in Children with Hearing Loss
Objectives: Efficient multisensory speech detection is critical for children who must quickly detect/encode a rapid stream of speech to participate in conversations and have access to the audiovisual cues that underpin speech and language development, yet multisensory speech detection remains understudied in children with hearing loss (CHL). This research assessed detection, along with vigilant/goal-directed attention, for multisensory versus unisensory speech in CHL versus children with normal hearing (CNH). Design: Participants were 60 CHL who used hearing aids and communicated successfully aurally/orally and 60 age-matched CNH. Simple response times determined how quickly children could detect a preidentified easy-to-hear stimulus (70 dB SPL, utterance “buh” presented in auditory only [A], visual only [V], or audiovisual [AV] modes). The V mode formed two facial conditions: static versus dynamic face. Faster detection for multisensory (AV) than unisensory (A or V) input indicates multisensory facilitation. We assessed mean responses and faster versus slower responses (defined by the first versus third quartiles of the response-time distributions), conceptualized as follows: faster responses (first quartile) reflect efficient detection with efficient vigilant/goal-directed attention, whereas slower responses (third quartile) reflect less efficient detection associated with attentional lapses. Finally, we studied associations between these results and personal characteristics of CHL. Results: Unisensory A versus V modes: Both groups showed better detection and attention for A than V input. The A input more readily captured children’s attention and minimized attentional lapses, which supports A-bound processing even by CHL who were processing low-fidelity A input. CNH and CHL did not differ in ability to detect A input at conversational speech level. Multisensory AV versus A modes: Both groups showed better detection and attention for AV than A input. The advantage for AV input was a facial effect (both static and dynamic faces), a pattern suggesting that communication is a social interaction that is more than just words. Attention did not differ between groups; detection was faster in CHL than CNH for AV input, but not for A input. Associations between personal characteristics/degree of hearing loss of CHL and results: CHL with the greatest deficits in detection of V input had the poorest word recognition skills, and CHL with the greatest reduction of attentional lapses from AV input had the poorest vocabulary skills. Both outcomes are consistent with the idea that CHL who are processing low-fidelity A input depend disproportionately on V and AV input to learn to identify words and associate them with concepts. As CHL aged, attention to V input improved. Degree of hearing loss did not influence results. Conclusions: Understanding speech—a daily challenge for CHL—is a complex task that demands efficient detection of and attention to AV speech cues. Our results support the clinical importance of multisensory approaches to understand and advance spoken communication by CHL. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank Dr.
Nancy Tye-Murray, Washington University School of Medicine (WUSM), for supervising data collection in CHL, the children and parents who participated, and the research staff who assisted: Aisha Aguilera, Carissa Dees, Nina Dinh, Nadia Dunkerton, Derek Hammons, Scott Hawkins, Brittany Hernandez, Demi Krieger, Rachel Parra McAlpine, Michelle McNeal, Jeffrey Okonye, and Kimberly Periman of UT-D (data collection, analysis, stimuli editing, computer programming) and Drs. Nancy Tye-Murray and Brent Spehar, WUSM (stimuli recording, editing). Supported by the NIDCD, grant DC-00421, to the University of Texas at Dallas (UT-D). Dr. Abdi acknowledges the support of an EURIAS fellowship at the Paris Institute for Advanced Studies (France), with the support of the European Union’s 7th Framework Program for research, and funding from the French State managed by the “Agence Nationale de la Recherche (program: Investissements d’avenir, ANR-11-LABX-0027-01 Labex RFIEA+).” The authors have no conflicts of interest to disclose. Received September 25, 2018; accepted July 22, 2019. Address for correspondence: Susan Jerger, School of Behavioral and Brain Sciences, GR4.1, University of Texas Dallas, 800 W. Campbell Rd, Richardson, TX 75080, USA. E-mail: sjerger@utdallas.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
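The faster/slower response analysis in the abstract above splits each response-time distribution at its first and third quartiles. A toy sketch of that split follows; the response times and modality structure are synthetic, whereas the study measured simple response times per child and modality.

```python
# Toy sketch of the quartile split described above: first-quartile
# (faster) responses index efficient detection, third-quartile (slower)
# responses index attentional lapses. Response times are synthetic.
import numpy as np

rng = np.random.default_rng(1)
rt_ms = {mode: rng.gamma(shape=5.0, scale=80.0, size=200)
         for mode in ("A", "V", "AV")}       # toy RTs per modality, ms

for mode, rts in rt_ms.items():
    q1, q3 = np.percentile(rts, [25, 75])
    print(f"{mode}: mean {rts.mean():.0f} ms, Q1 {q1:.0f} ms, Q3 {q3:.0f} ms")

# Multisensory facilitation: AV detection faster than the best
# unisensory mode (positive values indicate facilitation).
facilitation = min(rt_ms["A"].mean(), rt_ms["V"].mean()) - rt_ms["AV"].mean()
print(f"AV facilitation: {facilitation:.0f} ms")
```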
The Apolipoprotein E Allele and Sensorineural Hearing Loss in Older Community-Dwelling Adults in Australia
Objectives: Previous research has investigated whether the apolipoprotein E (APOE) ε4 allele, which is associated with an increased risk of cognitive decline, is also associated with hearing loss in older people. Results of the very limited research to date are conflicting, and sample sizes for all but one study were small. The present study aimed to investigate whether there is an association between the APOE ε4 allele and hearing loss in a large, population-based sample of community-dwelling older adults. Design: Cross-sectional audiometric data on hearing levels and APOE genotypes for 2006 participants (aged 55 to 85 years) of the Hunter Community Study were analyzed using multiple linear regression to examine the association between APOE ε4 carrier status and the 4-frequency pure-tone average (0.5 to 4 kHz) in the better hearing ear, and also across individual frequencies in the better ear. Results: Observed and expected APOE allele frequency distributions did not differ significantly overall from established general population allele frequency distributions. Unadjusted modeling using the better ear pure-tone average showed a statistically significant association between APOE ε4 allele status (0, 1, 2 copies) and lower hearing thresholds, but when the model was adjusted for age, this was no longer statistically significant. Across individual hearing frequencies, unadjusted regression modeling showed APOE ε4 status was significantly associated with a reduction in mean hearing thresholds at 1 and 2 kHz, but again this effect was no longer statistically significant after adjusting for age. Conclusions: The results of this study did not provide any evidence of a statistically significant association between APOE ε4 allele status and hearing loss for older adults. Further investigation of the effect of homozygous carrier status on hearing thresholds is required. ACKNOWLEDGMENTS: The authors thank the funding bodies, chief investigators, research staff, and particularly the participants of the Hunter Community Study. Supported by the University of Newcastle and the Extending Treatments, Education and Networks study, funded by the Hunter Medical Research Institute and Xstrata Coal. The authors have no conflicts of interest to disclose. Received February 12, 2019; accepted July 5, 2019. Address for correspondence: Julia Z. Sarant, Department of Audiology and Speech Pathology, 550 Swanston Street, The University of Melbourne, VIC 3010, Australia. E-mail: jsarant@unimelb.edu.au Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
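The unadjusted-versus-age-adjusted modeling described above follows a standard pattern. A minimal sketch with synthetic data (column names, sample generation, and effect sizes are all invented, not the study's) shows how an apparent allele effect can disappear once age enters the model:

```python
# Minimal sketch (synthetic data, invented column names) of the
# unadjusted vs. age-adjusted regression described in the abstract:
# better-ear PTA regressed on APOE e4 allele count (0, 1, 2 copies).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2006
df = pd.DataFrame({
    "age": rng.uniform(55, 85, n),
    "e4_copies": rng.choice([0, 1, 2], size=n, p=[0.74, 0.24, 0.02]),
})
# Toy thresholds: rise with age, with no true allele effect built in.
df["pta_better"] = 5 + 0.6 * (df["age"] - 55) + rng.normal(0, 8, n)

unadjusted = smf.ols("pta_better ~ e4_copies", data=df).fit()
adjusted = smf.ols("pta_better ~ e4_copies + age", data=df).fit()
print(unadjusted.params["e4_copies"], adjusted.params["e4_copies"])
```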
Middle Ear Muscle Reflex and Word Recognition in “Normal-Hearing” Adults: Evidence for Cochlear Synaptopathy?
Objectives: Permanent threshold elevation after noise exposure, ototoxic drugs, or aging is caused by loss of sensory cells; however, animal studies show that hair cell loss is often preceded by degeneration of synapses between sensory cells and auditory nerve fibers. The silencing of these neurons, especially those with high thresholds and low spontaneous rates, degrades auditory processing and may contribute to difficulties in understanding speech in noise. Although cochlear synaptopathy can be diagnosed in animals by measuring suprathreshold auditory brainstem responses, its diagnosis in humans remains a challenge. In mice, cochlear synaptopathy is also correlated with measures of middle ear muscle (MEM) reflex strength, possibly because the missing high-threshold neurons are important drivers of this reflex. The authors hypothesized that measures of the MEM reflex might be better than other assays of peripheral function in predicting difficulty hearing in challenging listening environments in human subjects. Design: The authors recruited 165 normal-hearing healthy subjects, between 18 and 63 years of age, with no history of ear or hearing problems, no history of neurologic disorders, and unremarkable otoscopic examinations. Word recognition in quiet and in difficult listening situations was measured in four ways: using isolated words from the Northwestern University Auditory Test No. 6 corpus with either (a) a 0 dB signal-to-noise ratio, (b) 45% time compression with reverberation, or (c) 65% time compression with reverberation, and (d) with a modified version of the QuickSIN. Audiometric thresholds were assessed at standard and extended high frequencies. Outer hair cell function was assessed by distortion product otoacoustic emissions (DPOAEs). Middle ear function and reflexes were assessed using three methods: the acoustic reflex threshold as measured clinically, wideband tympanometry as measured clinically, and a custom wideband method that uses a pair of click probes flanking an ipsilateral noise elicitor. Other aspects of peripheral auditory function were assessed by measuring click-evoked gross potentials, that is, the summating potential (SP) and action potential (AP), from ear canal electrodes. Results: After adjusting for age and sex, word recognition scores were uncorrelated with audiometric or DPOAE thresholds, at either standard or extended high frequencies. MEM reflex thresholds were significantly correlated with scores on isolated word recognition, but not with the modified version of the QuickSIN. The highest pairwise correlations were seen using the custom assay. AP measures were correlated with some of the word scores, but not as highly as seen for the custom MEM assay, and only if amplitude was measured from SP peak to AP peak, rather than baseline to AP peak. The highest pairwise correlations with word scores, on all four tests, were seen with the SP/AP ratio, followed closely by SP itself. When all predictor variables were combined in a stepwise multivariate regression, SP/AP dominated models for all four word score outcomes. MEM measures only enhanced the adjusted r² values for the 45% time compression test. The only other predictors that enhanced model performance (and only for two outcome measures) were measures of interaural threshold asymmetry.
Conclusions: Results suggest that, among normal-hearing subjects, there is a significant peripheral contribution to diminished hearing performance in difficult listening environments that is not captured by either threshold audiometry or DPOAEs. The significant univariate correlations between word scores and either SP/AP, SP, MEM reflex thresholds, or AP amplitudes (in that order) are consistent with a type of primary neural degeneration. However, interpretation is clouded by uncertainty as to the mix of pre- and postsynaptic contributions to the click-evoked SP. None of the assays presented here has the sensitivity to diagnose neural degeneration on a case-by-case basis; however, these tests may be useful in longitudinal studies to track accumulation of neural degeneration in individual subjects. ACKNOWLEDGMENTS: The authors gratefully acknowledge Mrs. Inge Knudson for coordinating subject recruitment. The authors thank Drs. J. J. Guinan, Jr., S. G. Kujawa, and M. D. Valero for their comments on earlier versions of this manuscript. The authors also gratefully acknowledge a gift from Decibel Therapeutics for the purchase of the commercial audiometric equipment. A.M.M. and S.A.K. performed the experiments and contributed equally to this work. K.E.H. developed software for data acquisition and analysis. K.B. and V.de.G. ran the statistical analyses. M.C.L. and S.F.M. designed the study and wrote the article. S.F.M. also performed experiments and data analysis. This work was supported by the National Institutes of Health – National Institute on Deafness and Other Communication Disorders P50 DC015857 (S.F.M., project principal investigator [PI]) and the Lauer Tinnitus Research Center at Massachusetts Eye & Ear (S.F.M., PI). M.C.L. is a scientific founder of Decibel Therapeutics. The other authors have no conflicts of interest to declare. Received June 20, 2018; accepted August 13, 2019. Address for correspondence: Stéphane F. Maison, Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA 02114, USA. E-mail: stephane_maison@meei.harvard.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
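Two details in the abstract above are easy to make concrete: AP amplitude can be measured either from baseline or from the SP peak, and the SP/AP ratio was the strongest predictor. A minimal sketch with hypothetical voltages follows; electrocochleography sign and measurement conventions vary between labs, so these numbers are purely illustrative.

```python
# Minimal sketch, with hypothetical voltages, of the two AP amplitude
# conventions contrasted in the abstract and of the SP/AP ratio.
baseline_uv = 0.00    # pre-stimulus baseline, microvolts
sp_peak_uv = -0.12    # summating potential peak
ap_peak_uv = -0.55    # action potential (wave I) peak

ap_from_baseline = abs(ap_peak_uv - baseline_uv)  # baseline-to-AP-peak
ap_from_sp_peak = abs(ap_peak_uv - sp_peak_uv)    # SP-peak-to-AP-peak
sp_amplitude = abs(sp_peak_uv - baseline_uv)
sp_ap_ratio = sp_amplitude / ap_from_baseline     # conventional SP/AP

print(f"AP (baseline): {ap_from_baseline:.2f} uV, "
      f"AP (from SP): {ap_from_sp_peak:.2f} uV, SP/AP: {sp_ap_ratio:.2f}")
```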
Rerouting Hearing Aid Systems for Overcoming Simulated Unilateral Hearing in Dynamic Listening Situations
Objectives: Unilateral hearing loss increases the risk of academic and behavioral challenges for school-aged children. Previous research suggests that remote microphone (RM) systems offer the most consistent benefits for children with unilateral hearing loss in classroom environments relative to other nonsurgical interventions. However, generalizability of previous laboratory work is limited because of the specific listening situations evaluated, which often included speech and noise signals originating from the side. In addition, early studies focused on speech recognition tasks requiring limited cognitive engagement. However, those laboratory conditions do not reflect characteristics of contemporary classrooms, which are cognitively demanding and typically include multiple talkers of interest in relatively diffuse background noise. The purpose of this study was to evaluate the potential effects of rerouting amplification systems, specifically a RM system and a contralateral routing of signal (CROS) system, on speech recognition and comprehension of school-age children in a laboratory environment designed to emulate the dynamic characteristics of contemporary classrooms. It was expected that listeners would benefit from the CROS system when the head shadow limits audibility (e.g., monaural indirect listening). It was also expected that listeners would benefit from the RM system only when the RM was near the talker of interest. Design: Twenty-one children (10 to 14 years, M = 11.86) with normal hearing participated in laboratory tests of speech recognition and comprehension. Unilateral hearing loss was simulated by presenting speech-shaped masking noise to one ear via an insert earphone. Speech stimuli were presented from 1 of 4 loudspeakers located at either 0°, +45°, −90°, and −135° or 0°, −45°, +90°, and +135°. Cafeteria noise was presented from separate loudspeakers surrounding the listener. Participants repeated sentences (sentence recognition) and also answered questions after listening to an unfamiliar story (comprehension). They were tested unaided, with a RM system (microphone near the front loudspeaker), and with a CROS system (ear-level microphone on the ear with simulated hearing loss). Results: Relative to unaided listening, both rerouting systems reduced sentence recognition performance for most signals originating near the ear with normal hearing (monaural direct loudspeakers). Only the RM system improved speech recognition for midline signals, which were near the RM. Only the CROS system significantly improved speech recognition for signals originating near the ear with simulated hearing loss (monaural indirect loudspeakers). Although the benefits were generally small (approximately 6.5 percentage points), the CROS system also improved comprehension scores, which reflect overall listening across all four loudspeakers. Conversely, the RM system did not improve comprehension scores relative to unaided listening. Conclusions: Benefits of the CROS system in this study were small, specific to situations where speech is directed toward the ear with hearing loss, and relative only to a RM system utilizing one microphone. Although future study is warranted to evaluate the generalizability of the findings, the data demonstrate both CROS and RM systems are nonsurgical interventions that have the potential to improve speech recognition and comprehension for children with limited useable unilateral hearing in dynamic, noisy classroom situations. 
Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank Christine Jones, Lori Rakita, and Ora Bürkli for their insightful comments during study design. We also thank Laura Allen for consultation on the Coh-Metrix evaluation of stories used for the comprehension task. This project was funded by a grant from Sonova AG. Portions of the project were presented at the Unilateral Hearing Loss Conference, sponsored by Phonak, in Philadelphia, PA (October 22–24, 2017) and at the Scientific and Technical Conference of the American Auditory Society in Scottsdale, AZ (March 1–3, 2018). Stimulus development for this project was supported by NIH grant P20 GM109023 (D.L.). The content of this manuscript is solely the responsibility of the authors and does not necessarily represent the views of the National Institutes of Health. D.L. and A.M.T. are members of the Phonak Pediatric Research Advisory Board. Received July 16, 2018; accepted July 30, 2019. Address for correspondence: Erin M. Picou, Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, 1215 21st Ave South, Room 8310, Nashville, TN 37232, USA. E-mail: erin.picou@vanderbilt.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
