Friday, November 8, 2019

Effect of Stimulation Rate on Speech Understanding in Older Cochlear-Implant Users
Objectives: Cochlear implants (CIs) are considered a safe and effective intervention for more severe degrees of hearing loss in adults of all ages. Although older CI users ≥65 years of age can obtain large benefits in speech understanding from a CI, there is a growing body of literature suggesting that older CI users may not perform as well as younger CI users. One reason for this potential age-related limitation could be that default CI stimulation settings are not optimal for older CI users. The goal of this study was to determine whether improvements in speech understanding were possible when CI users were programmed with nondefault stimulation rates and to determine whether lower-than-default stimulation rates improved older CI users’ speech understanding. Design: Sentence recognition was measured acutely using different stimulation rates in 37 CI users ranging in age from 22 to 87 years. Maps were created using rates of 500, 720, 900, and 1200 pulses per second (pps) for each subject. An additional map using a rate higher than 1200 pps was also created for individuals who used a higher rate in their clinical processors. Thus, the clinical rate of each subject was also tested, including non-default rates above 1200 pps for Cochlear users and higher rates consistent with the manufacturer defaults for subjects implanted with Advanced Bionics and Med-El devices. Speech understanding performance was evaluated at each stimulation rate using AzBio and Perceptually Robust English Sentence Test Open-set (PRESTO) sentence materials tested in quiet and in noise. Results: For Cochlear-brand users, speech understanding performance using non-default rates was slightly poorer when compared with the default rate (900 pps). However, this effect was offset somewhat by age, in which older subjects were able to maintain comparable performance using a 500-pps map compared with the default rate map when listening to the more difficult PRESTO sentence material. Advanced Bionics and Med-El users showed modest improvements in their overall performance using 720 pps compared with the default rate (>1200 pps). On the individual-subject level, 10 subjects (11 ears) showed a significant effect of stimulation rate, with 8 of those ears performing best with a lower-than-default rate. Conclusions: Results suggest that default stimulation rates are likely sufficient for many CI users, but some CI users at any age can benefit from a lower-than-default rate. Future work that provides experience with novel rates in everyday life has the potential to identify more individuals whose performance could be improved with changes to stimulation rate. ACKNOWLEDGMENTS: The authors thank Advanced Bionics, Cochlear Ltd., and Med-El for testing equipment and technical support. Leo Litvak from Advanced Bionics provided helpful feedback on a previous version of this report. The authors thank the University of Maryland’s College of Behavioral & Social Sciences (BSOS) Dean’s Office for their support; Brittany N. Jaekel for her help with implementing and interpreting the multilevel analysis; and Allison Heuber, Sasha Pletnikova, Casey R. Gaskins, Kelly Miller, Lauren Wilson, and Calli M. Yancey for their help in data collection and analysis. N. N. conceived the idea for the experiment. N. N., R. H., and D. J. E. assisted in recruiting subjects. M. J. S., N. N., S.G.S., and M. J. G. designed the methods. M. J. S. collected and analyzed the data, drafted the manuscript, and prepared the figures. M. C. 
analyzed the retrospective data on stimulation rates presented in Figure 1. R. H., D. J. E., and S. A. provided input on study design and interpretation. S.G.S. and M. J. G. supervised the project. M. J. S., S.G.S., and M. J. G. interpreted results of the experiments. All authors edited and revised the manuscript and approved the final version. This work was supported by National Institutes of Health (NIH) grant R01-AG051603 (M. J. G.), NIH grants R01-AG09191 and R37-AG09191 (S.G.S.), NIH grant F32-DC016478 (M. J. S.), NIH grant T32-DC000046E (M. J. S.), NIH Institutional Research Grant T32-DC000046E (S.G.S.: Co-Principal Investigator with Catherine Carr), and a seed grant from the University of Maryland - College of Behavioral and Social Sciences, College Park, MD. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This work met all requirements for ethical research put forth by the Institutional Review Board (IRB) of the University of Maryland. The authors have no conflicts of interest to disclose. Received March 2, 2018; accepted July 16, 2019. Address for correspondence: Maureen J. Shader, Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA. E-mail: mshader@bionicsinstitute.org Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Relationship Between Speech Recognition in Quiet and Noise and Fitting Parameters, Impedances and ECAP Thresholds in Adult Cochlear Implant Users
Objectives: The objective of this study was to identify parameters which are related to speech recognition in quiet and in noise of cochlear implant (CI) users. These parameters may be important to improve current fitting practices. Design: Adult CI users who visited the Amsterdam UMC, location VUmc, for their annual follow-up between January 2015 and December 2017 were retrospectively identified. After applying inclusion criteria, the final study population consisted of 138 postlingually deaf adult Cochlear CI users. Prediction models were built with speech recognition in quiet and in noise as the outcome measures, and aided sound field thresholds, and parameters related to fitting (i.e., T and C levels, dynamic range [DR]), evoked compound action potential thresholds and impedances as the independent variables. A total of 33 parameters were considered. Separate analyses were performed for postlingually deafened CI users with late onset (LO) and CI users with early onset (EO) of severe hearing impairment. Results: Speech recognition in quiet was not significantly different between the LO and EO groups. Speech recognition in noise was better for the LO group compared with the EO group. For CI users in the LO group, mean aided thresholds, mean electrical DR, and measures to express the impedance profile across the electrode array were identified as predictors of speech recognition in quiet and in noise. For CI users in the EO group, the mean T level appeared to be a significant predictor in the models for speech recognition in quiet and in noise, such that CI users with elevated T levels had worse speech recognition in quiet and in noise. Conclusions: Significant parameters related to speech recognition in quiet and in noise were identified: aided thresholds, electrical DR, T levels, and impedance profiles. The results of this study are consistent with previous study findings and may guide audiologists in their fitting practices to improve the performance of CI users. The best performance was found for CI users with aided thresholds around the target level of 25 dB HL, and an electrical DR between 40 and 60 CL. However, adjustments of T and/or C levels to obtain aided thresholds around the target level and the preferred DR may not always be acceptable for individual CI users. Finally, clinicians should pay attention to profiles of impedances other than a flat profile with mild variations. ACKNOWLEDGMENTS: F.d.G. and C.S. designed the study and organized and carried out the data collection. F.d.G. and B.I.L.-W. analyzed the data. All authors participated in the interpretation of the data. F.d.G. had the leading role in the writing process. All authors revised the manuscript critically for important intellectual content and approved the current version to be submitted to Ear and Hearing. The authors have no conflicts of interest to disclose. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). Received March 18, 2019; accepted September 18, 2019. Address for correspondence: Cas Smits, Otolaryngology - Head and Neck Surgery, Ear and Hearing, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Public Health Research Institute, PO Box 7057, 1007 MB Amsterdam, the Netherlands. 
E-mail: c.smits@amsterdamumc.nl This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
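The prediction-modeling approach described in the abstract above can be illustrated with a small sketch: speech recognition as the outcome and fitting-related parameters as predictors. This is a minimal example assuming an ordinary least-squares model; the predictor names, synthetic values, and model form are illustrative assumptions, not the study's actual data or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 138  # study population size reported in the abstract

# Hypothetical predictors (names follow the abstract's description; values are synthetic)
aided_threshold = rng.normal(30, 5, n)   # mean aided sound field threshold, dB HL
dynamic_range = rng.normal(50, 10, n)    # mean electrical dynamic range, CL
t_level = rng.normal(120, 15, n)         # mean T level, CL
speech_in_quiet = rng.normal(80, 10, n)  # outcome: speech recognition score (placeholder)

# Ordinary least-squares fit: outcome ~ intercept + predictors
X = np.column_stack([np.ones(n), aided_threshold, dynamic_range, t_level])
coefs, *_ = np.linalg.lstsq(X, speech_in_quiet, rcond=None)
print(dict(zip(["intercept", "aided_threshold", "dynamic_range", "t_level"],
               np.round(coefs, 3))))
```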
Effects of Cognitive Load on Pure-Tone Audiometry Thresholds in Younger and Older Adults
Objectives: Cognitive load (CL) impairs listeners’ ability to comprehend sentences, recognize words, and identify speech sounds. Recent findings suggest that this effect originates in a disruption of low-level perception of acoustic details. Here, we attempted to quantify such a disruption by measuring the effect of CL (a two-back task) on pure-tone audiometry (PTA) thresholds. We also asked whether the effect of CL on PTA was greater in older adults, on account of their reduced ability to divide cognitive resources between simultaneous tasks. To specify the mechanisms and representations underlying the interface between auditory and cognitive processes, we contrasted CL requiring visual encoding with CL requiring auditory encoding. Finally, the link between the cost of performing PTA under CL, working memory, and speech-in-noise (SiN) perception was investigated and compared between younger and older participants. Design: Younger and older adults (44 in each group) did a PTA test at 0.5, 1, 2, and 4 kHz pure tones under CL and no CL. CL consisted of a visual two-back task running throughout the PTA test. The two-back task involved either visual encoding of the stimuli (meaningless images) or subvocal auditory encoding (a rhyme task on written nonwords). Participants also underwent a battery of SiN tests and a working memory test (letter number sequencing). Results: Younger adults showed elevated PTA thresholds under CL, but only when CL involved subvocal auditory encoding. CL had no effect when it involved purely visual encoding. In contrast, older adults showed elevated thresholds under both types of CL. When present, the PTA CL cost was broadly comparable in younger and older adults (approximately 2 dB HL). The magnitude of PTA CL cost did not correlate significantly with SiN perception or working memory in either age group. In contrast, PTA alone showed strong links to both SiN and letter number sequencing in older adults. Conclusions: The results show that CL can exert its effect at the level of hearing sensitivity. However, in younger adults, this effect is only found when CL involves auditory mental representations. When CL involves visual representations, it has virtually no impact on hearing thresholds. In older adults, interference is found in both conditions. The results suggest that hearing progresses from engaging primarily modality-specific cognition in early adulthood to engaging cognition in a more undifferentiated way in older age. Moreover, hearing thresholds measured under CL did not predict SiN perception more accurately than standard PTA thresholds. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank David Maidment for his help with previous versions of the PTA task, and Josh Spowage and Upasana Nathaniel for their help with data collection. Supported by grants from the Economic and Social Research Council (ES/L008300/1, ES/R004722/1) and Action on Hearing Loss (A0128005) to S.L.M. A.H. was supported by the National Institute for Health Research (NIHR) Manchester Biomedical Research Centre. M.A.F. was supported by the NIHR Nottingham Biomedical Research Centre. The authors alone are responsible for the content and writing of the paper. The views expressed are those of the authors and not necessarily those of the National Health Service, the NIHR, or the Department of Health. 
Parts of this work were presented at the meeting of the Psychonomic Society, Amsterdam, The Netherlands, 12 May 2018. A.H. contributed to designing the experiments, performed data analysis, and contributed to writing the article. M.A.F. contributed to designing the experiments, provided advice on the PTA procedure, and offered feedback during the write-up. S.L.M. designed the experiments, managed data collection, and contributed to analyzing the data and writing the article. The authors have no conflicts of interest to disclose. Address for correspondence: Sven L. Mattys, Department of Psychology, University of York, York, YO10 5DD, United Kingdom. E-mail: sven.mattys@york.ac.uk Received March 7, 2019; accepted September 10, 2019. This is an open-access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
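The "PTA CL cost" reported above is, in essence, the elevation of pure-tone thresholds measured under cognitive load relative to the no-load condition. A minimal sketch of that computation follows, using placeholder thresholds rather than study data.

```python
import numpy as np

freqs_khz = [0.5, 1, 2, 4]
thresholds_no_cl = np.array([10, 12, 15, 18])   # dB HL, one listener, no load
thresholds_cl = np.array([12, 14, 16, 21])      # dB HL, same listener, under load

cl_cost = thresholds_cl - thresholds_no_cl       # per-frequency threshold elevation in dB
print(dict(zip(freqs_khz, cl_cost)), "mean cost:", cl_cost.mean(), "dB")
```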
Evidence of Vowel Discrimination Provided by the Acoustic Change Complex
Objectives: The objectives of this study were to measure the effects of level and vowel contrast on the latencies and amplitudes of acoustic change complex (ACC) in the mature auditory system. This was done to establish how the ACC in healthy young adults is affected by these stimulus parameters that could then be used to inform translation of the ACC into a clinical measure for the pediatric population. Another aim was to demonstrate that a normalized amplitude metric, calculated by dividing the ACC amplitude in the vowel contrast condition by the ACC amplitude obtained in a control condition (no vowel change), would demonstrate good sensitivity with respect to perceptual measures of vowel-contrast detection. The premises underlying this research were that: (1) ACC latencies and amplitudes would vary with level, in keeping with principles of an increase in neural synchrony and activity that takes place as a function of increasing stimulus level; (2) ACC latencies and amplitudes would vary with vowel contrast, because cortical auditory evoked potentials are known to be sensitive to the spectro-temporal characteristics of speech. Design: Nineteen adults, 14 of them female, with a mean age of 24.2 years (range 20 to 38 years) participated in this study. All had normal-hearing thresholds. Cortical auditory evoked potentials were obtained from all participants in response to synthesized vowel tokens (/a/, /i/, /o/, /u/), presented in a quasi-steady state fashion at a rate of 2/sec in an oddball stimulus paradigm, with a 25% probability of the deviant stimulus. The ACC was obtained in response to the deviant stimulus. All combinations of vowel tokens were tested at 2 stimulus levels: 40 and 70 dBA. In addition, listeners were tested for their ability to detect the vowel contrasts using behavioral methods. Results: ACC amplitude varied systematically with level, test condition (control versus contrast), and vowel token, but ACC latency did not. ACC amplitudes were significantly larger when tested at 70 dBA compared with 40 dBA and for contrast trials compared with control trials at both levels. Amplitude ratios (normalized amplitudes) were largest for contrast pairs in which /a/ was the standard token. The amplitude ratio metric at the individual level demonstrated up to 97% sensitivity with respect to perceptual measures of discrimination. Conclusions: The present study establishes the effects of stimulus level and vowel type on the latency and amplitude of the ACC in the young adult auditory system and supports the amplitude ratio as a sensitive metric for cortical acoustic salience of vowel spectral features. Next steps are to evaluate these methods in infants and children with hearing loss with the long-term goal of their translation into a clinical method for estimating speech feature discrimination. ACKNOWLEDGMENTS: The authors gratefully acknowledge their consultations with Dr. Mark Borstrom and Dr. Julia Fisher regarding statistical analyses of the data. The authors have no conflicts of interest to disclose. Received January 2, 2018; accepted August 28, 2019. Address for correspondence: Barbara Cone, Department of Speech, Language, and Hearing Sciences, The University of Arizona, 1131 E. 2nd Street, Tucson, AZ 85721, USA. E-mail: conewess@email.arizona.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
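The normalized amplitude metric described in this abstract is the ratio of the ACC amplitude in a vowel-contrast condition to the ACC amplitude in the control (no-change) condition. A minimal sketch of that ratio with placeholder amplitudes:

```python
# Normalized amplitude metric: contrast ACC amplitude divided by control ACC amplitude.
# The amplitude values below are illustrative placeholders, not study data.
contrast_acc_uv = 3.2   # µV, ACC amplitude for a vowel contrast (e.g., /a/ as the standard)
control_acc_uv = 1.1    # µV, ACC amplitude in the control (no vowel change) condition

amplitude_ratio = contrast_acc_uv / control_acc_uv
print(f"Amplitude ratio: {amplitude_ratio:.2f}")  # ratios well above 1 suggest cortical detection of the change
```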
Superior Canal Dehiscence Similarly Affects Cochlear Pressures in Temporal Bones and Audiograms in Patients
Objectives: The diagnosis of superior canal dehiscence (SCD) is challenging and audiograms play an important role in raising clinical suspicion of SCD. The typical audiometric finding in SCD is the combination of increased air conduction (AC) thresholds and decreased bone conduction thresholds at low frequencies. However, this pattern is not always apparent in audiograms of patients with SCD, and some have hearing thresholds that are within the normal reference range despite subjective reports of hearing impairment. In this study, we used a human temporal bone model to measure the differential pressure across the cochlear partition (PDiff) before and after introduction of an SCD. PDiff estimates the cochlear input drive and provides a mechanical audiogram of the temporal bone. We measured PDiff across a wider frequency range than in previous studies and investigated whether the changes in PDiff in the temporal bone model and changes of audiometric thresholds in patients with SCD were similar, as both are thought to reflect the same physical phenomenon. Design: We measured PDiff across the cochlear partition in fresh human cadaveric temporal bones before and after creating an SCD. Measurements were made for a wide frequency range (20 Hz to 10 kHz), which extends down to lower frequencies than in previous studies and audiograms. PDiff = PSV - PST is calculated from pressures measured simultaneously at the base of the cochlea in scala vestibuli (PSV) and scala tympani (PST) during sound stimulation. The change in PDiff after an SCD is created quantifies the effect of SCD on hearing. We further included an important experimental control: patching the SCD to confirm that PDiff reverted to its initial state. To provide a comparison of temporal bone data to clinical data, we analyzed AC audiograms (250 Hz to 8 kHz) of patients with symptomatic unilateral SCD (radiographically confirmed). To achieve this, we used the unaffected ear to estimate the baseline hearing function for each patient, and determined the influence of SCD by referencing AC hearing thresholds of the SCD-affected ear with the unaffected contralateral ear. Results: PDiff measured in temporal bones (n = 6) and AC thresholds in patients (n = 53) exhibited a similar pattern of SCD-related change. With decreasing frequency, SCD caused a progressive decrease in PDiff at low frequencies for all temporal bones and a progressive increase in AC thresholds at low frequencies. SCD decreases the cochlear input drive by approximately 6 dB per octave at frequencies below ~1 kHz for both PDiff and AC thresholds. Individual data varied in frequency and magnitude of this SCD effect, where some temporal-bone ears had noticeable effects only below 250 Hz. Conclusions: We found that, with decreasing frequency, the progressive decrease in low-frequency PDiff in our temporal bone experiments mirrors the progressive elevation in AC hearing thresholds observed in patients. This hypothesis remains to be tested in the clinical setting, but our findings suggest that measuring AC thresholds at frequencies below 250 Hz would detect a larger change, thus improving audiograms as a diagnostic tool for SCD. ACKNOWLEDGMENTS: This work was supported by R01 DC013303, The American Otological Society Fellowship (Y.S.C.) and the German National Academic Foundation (S.R.). Y.S.C. and X.G. performed experiments, analyzed data, and wrote the article. S.R. provided interpretive analysis, provided figures, and wrote the article. D.H.L. and C.F.H.
were integral to the design of the project and collected clinical data, and provided critical revisions. H.H.N. conceptualized and performed experiments, and provided critical revisions. All authors discussed the results and implications and commented on the article at all stages. The authors have no conflicts of interest to disclose. Received August 17, 2018; accepted July 31, 2019. Address for correspondence: Hideko Heidi Nakajima, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA 02114, USA. E-mail: heidi_nakajima@meei.harvard.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
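The abstract above defines the cochlear input drive as PDiff = PSV - PST and reports SCD-related decreases of roughly 6 dB per octave below about 1 kHz. A minimal sketch of how such a change could be expressed in dB at a single frequency, using placeholder complex pressures rather than measured data:

```python
import numpy as np

# Hypothetical complex sound pressures at one low frequency (arbitrary units)
p_sv = 1.0 + 0.2j          # scala vestibuli pressure, baseline
p_st = 0.3 + 0.1j          # scala tympani pressure, baseline
p_diff_baseline = p_sv - p_st

p_diff_scd = 0.35 + 0.05j  # hypothetical differential pressure after creating the dehiscence

# SCD effect expressed as a change in cochlear input drive, in dB re: baseline
change_db = 20 * np.log10(abs(p_diff_scd) / abs(p_diff_baseline))
print(f"SCD-related change in cochlear input drive: {change_db:.1f} dB")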
Bone Conduction Stimulation Applied Directly to the Otic Capsule: Intraoperative Assessment in Humans
Objectives: The aim was to investigate the innovative method of direct acoustic bone conduction (BC) stimulation applied directly to the otic capsule and measured intraoperatively by promontory displacement in living humans. The objective was to find the best stimulation site that provides the greatest transmission of vibratory energy in a living human and compare it with the results previously obtained in cadavers. Design: The measurements were performed in 4 adult patients referred to our department for vestibular schwannoma removal via translabyrinthine approach. The measurements were performed at the operated site. The cadaver data were obtained in our previous study and are reanalyzed here for comparison. Promontory displacement was measured using a commercial scanning laser Doppler vibrometer. Laser Doppler vibrometer measurement points located on the promontory were used to analyze the promontory displacement amplitude. Cochlear stimulation was induced with BC stimulation through an implant positioned at three sites. The first site was on the skull surface at the squamous part of the temporal bone (BC No. 1), the second at the bone forming the ampulla of the lateral semicircular canal (BC No. 2), and the third between the superior and lateral semicircular canals (BC No. 3). BC No. 2 and BC No. 3 were located directly on the otic capsule. Four frequencies in total were tested (500, 1000, 2000, and 4000 Hz), one at a time. Results: In patients, the detailed analysis of promontory displacement amplitudes revealed the BC No. 1 magnitude to be the smallest and significantly different from BC No. 2 and No. 3 at all measured frequencies. Transmission of vibratory energy at BC No. 2 and BC No. 3 was the most effective and similar, with a small and insignificant difference at 500, 1000, and 4000 Hz, and a significant difference at 2000 Hz. The results observed in cadavers were similar to those in living humans. However, a few differences were observed when comparing patients and cadavers. Small and insignificant differences were found for BC No. 1. Almost the same results were obtained for BC No. 2 and BC No. 3 in cadavers as in living humans, with only the BC No. 3 measurement results at 500 Hz at the limit of statistical significance, with no other significant differences observed. Conclusions: The results of this study indicate that the promontory vibration amplitude increases when the BC stimulation location approaches the cochlea. BC No. 1 stimulation located on the squama caused overall smaller displacement than both BC No. 2 and No. 3 screwed to the ampulla of the lateral semicircular canal and to the midpoint between the semicircular canals, respectively. In our opinion, BC stimulation applied directly to the otic capsule represents a potential new stimulation site that could be introduced in the field of BC hearing rehabilitation. ACKNOWLEDGMENTS: This study was supported by the Polish National Center for Research and Development, grant number: PBS3/B7/25/2015. The authors have no conflicts of interest to disclose. Received March 21, 2019; accepted September 16, 2019. Address for correspondence: Magdalena Lachowska, Department of Otolaryngology, Medical University of Warsaw, ul. Banacha 1a, 02-097 Warsaw, Poland. E-mail: mlachowska@wum.edu.pl. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
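As a side note on the measurement chain described above, laser Doppler vibrometers report velocity; for a sinusoidal stimulus, the displacement amplitude follows from dividing the velocity amplitude by the angular frequency. This general relation is offered only for illustration and is not a processing step stated in the abstract; the values are placeholders.

```python
import math

frequency_hz = 1000.0        # one of the tested stimulation frequencies
velocity_m_s = 1e-4          # hypothetical peak velocity reported by the vibrometer

# Harmonic motion: displacement amplitude = velocity amplitude / (2 * pi * f)
displacement_m = velocity_m_s / (2 * math.pi * frequency_hz)
print(f"Promontory displacement: {displacement_m * 1e9:.1f} nm")
```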
Evidence of a Vestibular Origin for Crossed-Sternocleidomastoid Muscle Responses to Air-Conducted Sound
Objectives: Small, excitatory surface potentials can sometimes be recorded from the contralateral sternocleidomastoid muscle (SCM) following monaural acoustic stimulation. Little is known about the physiological properties of these crossed reflexes. In this study, we sought the properties of crossed SCM responses and, through comparison with other cochlear and vestibular myogenic potentials, their likely receptor origin. Design: Surface potentials were recorded from the ipsilateral and contralateral SCM and postauricular (PAM) muscles of 11 healthy volunteers, 4 patients with superior canal dehiscence, and 1 with profound hearing loss. Air-conducted clicks of 105 dB nHL and tone bursts (250 to 4000 Hz) of 100 dB nHL were presented monaurally through TDH 49 headphones during head elevation. Click-evoked responses were recorded under two conditions of gaze in random order: gaze straight ahead and rotated hard toward the contralateral recording electrodes. Amplitudes (corrected and uncorrected) and latencies for crossed SCM responses were compared with vestibular (ipsilateral SCM) and cochlear (PAM) responses between groups and across the different recording conditions. Results: Surface waveforms were biphasic: positive-negative for the ipsilateral SCM, and negative-positive for the contralateral SCM and PAM. There were significant differences in the amplitudes and latencies (p = 0.004) for click responses of healthy controls across recording sites. PAM responses had the largest mean-corrected amplitudes (2.3 ± 2.8) and longest latencies (13.0 ± 1.2 msec), compared with ipsilateral (1.6 ± 0.5; 12.0 ± 0.7 msec) and contralateral (0.8 ± 0.3; 10.4 ± 1.0 msec) SCM responses. Uncorrected amplitudes and muscle activation for PAM increased by 104.4% and 46.8% with lateral gaze, respectively, whereas SCM responses were not significantly affected. Click responses of patients with superior canal dehiscence followed a similar latency, amplitude, and gaze modulation trend as controls. SCM responses were preserved in the patient with profound hearing loss, yet all PAM responses were absent. There were significant differences in the frequency tuning of the three reflexes (p < 0.001). Tuning curves of healthy controls were flat for PAM and down sloping for ipsilateral and contralateral SCM responses. For superior canal dehiscence, they were rising for PAM and slightly down sloping for SCM responses. Conclusions: Properties of crossed SCM responses were similar, though not identical, to those of ipsilateral SCM responses and are likely to be predominantly vestibular in origin. They are unlikely to represent volume conduction from the PAM as they were unaffected by lateral gaze, were shorter in latency, and had different tuning properties. The influence of crossed vestibulo-collic pathways should be considered when interpreting cervical vestibular-evoked myogenic potentials recorded under conditions of binaural stimulation. ACKNOWLEDGMENTS: R.L.T. received support from a University of Sydney Postgraduate Award during data collection and is currently supported by an Aotearoa Fellowship, Centre for Brain Research, The University of Auckland (from 01/06/2018). S.M.R. was supported by the National Health and Medical Research Council of Australia (GNT1104772). M.S.W. was supported by the National Health and Medical Research Council of Australia (APP1010017 and APP1063566). R.L.T. designed the experiment, collected and analyzed data, prepared the figures, and wrote the manuscript. R.W.W.
collected and analyzed data and approved the final manuscript. E.C.A. collected data, assisted with the figure preparation, and approved the final version. S.M.R. collected data and provided critical revision of the manuscript draft. M.S.W. provided critical revision of the manuscript draft. The authors have no conflicts of interest to disclose. Received December 13, 2018; accepted August 28, 2019. Address for correspondence: Rachael L. Taylor, Faculty of Medical and Health Sciences, Department of Physiology, University of Auckland, 85 Park Road, Grafton, Auckland, New Zealand. E-mail: rachael.taylor@auckland.ac.nz Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
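The abstract above reports both corrected and uncorrected amplitudes. One common correction for cervical myogenic responses is to divide the raw peak-to-peak amplitude by the mean rectified background EMG; whether this particular normalization was used in the study is an assumption, and the values below are placeholders.

```python
# Common amplitude correction for cervical myogenic responses (assumed, not confirmed
# by the abstract): raw peak-to-peak amplitude divided by mean rectified background EMG.
raw_p1n1_uv = 120.0          # hypothetical raw peak-to-peak amplitude, µV
background_emg_uv = 75.0     # hypothetical mean rectified prestimulus EMG, µV

corrected_amplitude = raw_p1n1_uv / background_emg_uv   # dimensionless ratio
print(f"Corrected amplitude: {corrected_amplitude:.2f}")
```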
The Effect of Interphase Gap on Neural Response of the Electrically Stimulated Cochlear Nerve in Children With Cochlear Nerve Deficiency and Children With Normal-Sized Cochlear Nerves
Objectives: This study aimed to compare the effects of increasing the interphase gap (IPG) on the neural response of the electrically stimulated cochlear nerve (CN) between children with CN deficiency (CND) and children with normal-sized CNs. Design: Study participants included 30 children with CND and 30 children with normal-sized CNs. All subjects were implanted with a Cochlear Nucleus device with the internal electrode array 24RE[CA] in the test ear. The stimulus was a charge-balanced, cathodic-leading, biphasic pulse with a pulse-phase duration of 50 μsec. For each subject, the electrically evoked compound action potential (eCAP) input/output (I/O) function was measured for 6 IPGs (i.e., 7, 14, 21, 28, 35, and 42 μsec) at 3 electrode locations across the electrode array. For each subject and each testing electrode, the highest stimulation level used to measure the eCAP I/O function was the maximum comfortable level measured with an IPG of 42 μsec. Dependent variables (DVs) were the maximum eCAP amplitude, the eCAP threshold, and the slope of the eCAP I/O function estimated using both linear and sigmoidal regression functions. For each DV, the size of the IPG effect was defined as the proportional change relative to the result measured for the 7 μsec IPG at the basal electrode location. Generalized linear mixed-effects models with subject group, electrode location, and IPG duration as the fixed effects and subject as the random effect were used to compare these DVs and the size of the IPG effect on these DVs. Results: Children with CND showed smaller maximum eCAP amplitudes, higher eCAP thresholds, and smaller slopes of the eCAP I/O function estimated using either the linear or sigmoidal regression function than children with normal-sized CNs. Increasing the IPG duration resulted in larger maximum eCAP amplitudes, lower eCAP thresholds, and larger slopes of the eCAP I/O function estimated using the sigmoidal regression function at all three electrode locations in both study groups. Compared with children with normal-sized CNs, children with CND showed larger IPG effects on both the maximum eCAP amplitude and the slope of the eCAP I/O function estimated using either the linear or sigmoidal regression function, and a smaller IPG effect on the eCAP threshold. Conclusions: Increasing the IPG increases responsiveness of the electrically stimulated CN in both children with CND and children with normal-sized CNs. The maximum eCAP amplitude and the slope of the eCAP I/O function measured in human listeners with poorer CN survival are more sensitive to changes in the IPG. In contrast, the eCAP threshold in listeners with poorer CN survival is less sensitive to increases in the IPG. Further studies are warranted to identify the best parameters of eCAP results for predicting CN survival before this eCAP testing paradigm can be used as a clinical tool for evaluating neural health for individual cochlear implant patients. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We gratefully thank all subjects and their parents for participating in this study. We also gratefully thank all three anonymous reviewers for their helpful comments.
Supported by the R01 grant from National Institute on Deafness and Other Communication Disorders (R01 DC017846) and the R01 grant from National Institute on Deafness and Other Communication Disorders and National Institute of General Medical Sciences (R01 DC016038). Portions of this project were presented at the 42nd Annual MidWinter Meeting of The Association for Research in Otolaryngology, Baltimore, MD. S.H. designed the study, participated in data collection and patient testing, drafted and approved the final version of this article. L.X. participated in data collection and patient testing, prepared the initial draft of this article, provided critical comments and approved the final version of this article. J.S. and F.-C.J. participated in data analysis, provided critical comments, and approved the final version of this article. X.C., J.F.L. and R.W. participated in data collection and patient testing, provided critical comments and approved the final version of this article. H.W. provided critical comments and approved the final version of this article. The authors have no conflicts of interest to disclose. Received April 5, 2019; accepted September 16, 2019. Address for correspondence: Shuman He, MD, PhD, Eye and Ear Institute, Department of Otolaryngology - Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Suite 4000, Columbus, OH 43212, USA. E-mail: shuman.he@osumc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
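A minimal sketch of the two computations described in the abstract above: fitting an eCAP input/output function with a sigmoidal regression and expressing an IPG effect as the proportional change relative to the 7 μsec IPG condition. Stimulation levels, amplitudes, and fitted values are illustrative placeholders, not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(level, a_max, midpoint, slope):
    """Sigmoidal eCAP I/O function: amplitude as a function of stimulation level."""
    return a_max / (1 + np.exp(-slope * (level - midpoint)))

# Hypothetical I/O data for one electrode at an IPG of 7 µsec
levels = np.arange(170, 221, 5)  # current levels (CL)
amps = sigmoid(levels, 800, 195, 0.2) + np.random.default_rng(1).normal(0, 20, levels.size)

params, _ = curve_fit(sigmoid, levels, amps, p0=[amps.max(), levels.mean(), 0.1])
a_max_7us = params[0]                                  # fitted maximum amplitude, IPG = 7 µsec

a_max_42us = 950.0                                     # hypothetical fitted maximum, IPG = 42 µsec
ipg_effect = (a_max_42us - a_max_7us) / a_max_7us      # proportional change re: the 7 µsec IPG
print(f"IPG effect on maximum eCAP amplitude: {ipg_effect:.2%}")
```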
Family Environment in Children With Hearing Aids and Cochlear Implants: Associations With Spoken Language, Psychosocial Functioning, and Cognitive Development
Objectives: To examine differences in family environment and associations between family environment and key speech, language, and cognitive outcomes in samples of children with normal hearing and deaf and hard-of-hearing (DHH) children who use hearing aids and cochlear implants. Design: Thirty families of children with normal hearing (n = 10), hearing aids (n = 10), or cochlear implants (n = 10) completed questionnaires evaluating executive function, social skills, and problem behaviors. Children’s language and receptive vocabulary were evaluated using standardized measures in the children’s homes. In addition, families were administered a standardized in-home questionnaire and observational assessment regarding the home environment. Results: Family environment overall was similar across hearing level and sensory aid, although some differences were found on parental responsivity and physical environment. The level of supportiveness and enrichment within family relationships accounted for much of the relations between family environment and the psychosocial and neurocognitive development of DHH children. In contrast, the availability of objects and experiences to stimulate learning in the home was related to the development of spoken language. Conclusions: Whereas broad characteristics of the family environments of DHH children may not differ from those of hearing children, variability in family functioning is related to DHH children’s at-risk speech, language, and cognitive outcomes. Results support the importance of further research to clarify and explain these relations, which might suggest novel methods and targets of family-based interventions to improve developmental outcomes. ACKNOWLEDGMENTS: K.L. is currently at Boys Town National Research Hospital, Omaha, Nebraska, USA. This research was funded by the Indiana University Collaborative Research Grant (R.F.H. & W.G.K.), NIH-NIDCD R01014956 (R.F.H. and D.B.P.), and the NIH-NIDCD (T3200011 to D.B.P.). Portions of this study were presented at the American Auditory Society Meeting, Scottsdale, AZ (March 2013) and the CI 2013 Symposium (October 2013). R.F.H. and W.G.K. designed experiments, analyzed data, and wrote the paper; J.B. and D.B.P. designed experiments, provided interpretive analysis and critical revision; K.L. collected data, provided interpretive analysis, and critical revision; and L.M. collected data and provided critical revision. The authors have no conflicts of interest to disclose. Received February 13, 2019; accepted September 3, 2019. Address for correspondence: Rachael Frush Holt, Department of Speech and Hearing Science, The Ohio State University, 110 Pressey Hall, 1070 Carmack Road, Columbus, OH 43210, USA. E-mail: holt.339@osu.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Wideband Acoustic Immittance in Cochlear Implant Recipients: Reflectance and Stapedial Reflexes
Objectives: This study aims to characterize differences in wideband power reflectance for ears with and without cochlear implants (CIs), to describe electrically evoked stapedial reflex (eSR)-induced changes in reflectance, and to evaluate the benefit of a broadband probe for reflex threshold determination for CI recipients. It was hypothesized that reflectance patterns in ears with CIs would be consistent with increased middle ear stiffness and that reflex thresholds measured with a broadband probe would be lower compared with thresholds obtained with a single-frequency probe. Design: Eleven CI recipients participated in both wideband reflectance and eSR testing. Ipsilateral reflexes were measured with three probes: a broadband chirp (swept from 200 to 8000 Hz), a 226 Hz tone, and a 678 Hz tone. Wideband reflectance measures acquired from 28 adults without CIs and with normal middle ear function served as a normative data set for comparison. Results: Considering the group data, average reflectance was significantly greater for ears with CIs across 250 to 891 Hz and 4238 to 4490 Hz compared with the normative data set, although individual reflectance curves were variable. Some CI recipients also had low 226 Hz admittance, which contributed to the group finding, considering the control group had clinically normal 226 Hz admittance by design. Electrically evoked stapedial reflexes were measurable in nine of 14 ears (64.3%) and in 24 of 46 electrodes (52.5%) tested. Reflex-induced changes in reflectance patterns were unique to the participant/ear, but similar across activators (electrodes) within a given ear. In addition, reflectance values at or above 1000 Hz were affected most by activating the stapedial reflex, even in ears with clinically normal 226 Hz admittance. This is a higher-frequency range than has been reported for acoustically evoked reflex-induced reflectance changes and is consistent with increased middle ear stiffness at rest. Electrically evoked reflexes could be measured more often with the 678 Hz or the broadband probe compared with the 226 Hz probe tone. Although reflex thresholds were lower with the broadband probe compared with the 678 Hz probe in 16 of 24 conditions, this was not a statistically significant finding (Wilcoxon signed-rank test; p = 0.072). Conclusions: The applications of wideband acoustic immittance measurements (reflectance and reflexes) should also be considered for ears with CIs. Further work is needed to describe changes across time in ears with CIs to more fully understand the reflectance pattern indicating increased middle ear stiffness and to optimize measuring eSRs with a broadband probe. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Susan Voss, Smith College, and members of her laboratory (Katie Fairbank and Lauren Tinglin) for estimating ear-canal areas, and Shawn Goodman for assistance with the Auditory Research Lab audio software (ARLas, MATLAB), which provided the basic framework for the ear-canal recordings and reflectance measurements. The authors also acknowledge James D. Lewis, University of Tennessee Health Science Center, for assistance during the project development stage. This work was supported by Montclair State University by start-up funds and an internal grant (FY2018 Separately Budgeted Research Internal Award; R.A.S.). 
Preliminary results were presented at the 2018 annual meeting of the American Auditory Society, Scottsdale, AZ, USA, and were awarded a New Investigator Travel Award. Data were collected at Montclair State University by R.A.S. Received June 19, 2019; accepted August 28, 2019. The authors have no conflicts of interest to declare. Address for correspondence: Rachel A. Scheperle, St. Louis Children’s Hospital, Audiology, 1 Children’s Place, Saint Louis, MO 63110, USA. E-mail: rachel.scheperle@bjc.org Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
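A minimal sketch of the reflex-induced change in power reflectance implied by this abstract: the per-frequency difference between reflectance measured with the stapedial reflex activated and at rest. Frequencies and reflectance values are illustrative placeholders, not study data.

```python
import numpy as np

freqs_hz = np.array([250, 500, 1000, 2000, 4000])
reflectance_rest = np.array([0.95, 0.85, 0.55, 0.40, 0.50])    # power reflectance (0 to 1), reflex at rest
reflectance_reflex = np.array([0.95, 0.86, 0.62, 0.43, 0.51])  # with the electrical activator on

shift = reflectance_reflex - reflectance_rest                   # reflex-induced change per frequency
print(dict(zip(freqs_hz, np.round(shift, 3))))                  # largest shifts expected at or above 1000 Hz
```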
