Thursday, August 1, 2019

A Cross-Sectional Study of the Prevalence and Factors Associated With Tinnitus and/or Hyperacusis in Children
Objectives: The aim of this study was to determine the prevalence of tinnitus and/or hyperacusis in Danish children aged 10 to 16 years, and to assess associations between tinnitus or hyperacusis and other relevant factors. Design: A cross-sectional study based on a previously established child cohort. A total of 501 children were enrolled in the project. The study was performed in eight mainstream schools and data were collected during an 8-week period from October 27, 2014 to December 16, 2014. Results: Using broad tinnitus research questions, the prevalence of any tinnitus was 66.9%; of noise-induced tinnitus (NIT) was 35.7%; and of spontaneous tinnitus (ST) was 53.7%. Bothersome tinnitus was reported by 34.6% of the children with any tinnitus (23.2% of the whole population). Few children were severely bothered (2.4% of those with any tinnitus; 1.6% of the whole population). It was significantly more common for children with NIT to report tinnitus episodes lasting for minutes or longer than for children with ST (p = 0.01). Girls were more likely than boys to be bothered by tinnitus [Odds ratio (OR) = 2.96; 95% confidence interval (CI) 1.34 to 6.51; p = 0.01]. Hyperacusis was reported by 14.6% of the children, and 72.6% of those reporting hyperacusis were bothered by it (10.6% of the whole population). The odds of having hyperacusis were 4.73 (1.57, 14.21) times higher among those with ST compared with those without ST. Furthermore, hyperacusis was associated with sound avoidance behaviors such as experience of sound-induced pain in the ear (OR = 2.95, 95% CI 1.65 to 5.27; p < 0.001), withdrawal from places or activities (OR = 3.33; 95% CI 1.44 to 7.69; p = 0.01), or concern that sound could damage their hearing (OR = 1.85, 95% CI 1.06 to 3.31; p = 0.03). Conclusions: Tinnitus and hyperacusis are common in children, but prevalence is dependent on tinnitus definitions. Only a few children are severely bothered by tinnitus. In the case of hyperacusis, children may exhibit sound avoidance behavior. ACKNOWLEDGMENTS: First, we thank all the children and families participating in this study. We are also extremely grateful to the ALSPAC study for collaborating by providing their questionnaire and study protocol for our use, and for their additional help. We also thank The Municipality of Svendborg and the Svendborg Project for including us in their project. SDE College Odense kindly participated with final-year students who provided all hearing measurements. A special thanks to technician Arne Hutflesz for his support and ongoing technical assistance. Rachel Humphriss and Amanda Hall were generous in sharing the protocol and definitions used in Humphriss et al. (2016). The present publication is the work of the authors, and Susanne Nemholt will serve as guarantor for the contents of this article. This study is part of the Ph.D. project Tinnitus and Hyperacusis Among Children and Adolescents in Denmark (THACAD), which has been funded by The Capital Region of Denmark, The University of Southern Denmark and The Danish Association of the Hard of Hearing. This particular study was additionally funded by Oticon Fonden and GN Store Nord Fondet. This report is independent research, and David Baguley’s involvement is funded by the National Institute for Health Research. The views expressed in this publication are those of the authors, and not necessarily those of the NHS, the National Institute for Health Research, or the UK Department of Health. The authors have no conflicts of interest to disclose.
Received August 5, 2016; accepted May 20, 2019. Address for correspondence: Susanne Nemholt, Syddansk Universitet, Campusvej 55, DK-5230 Odense, Denmark. E-mail: snemholt@health.sdu.dk This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
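The abstract above reports its associations as odds ratios with 95% confidence intervals (for example, OR = 2.96 for girls versus boys being bothered by tinnitus). For readers who want to reproduce that style of calculation, here is a minimal sketch of how an odds ratio and a Wald-type confidence interval are typically derived from a 2x2 table; the counts below are hypothetical and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts (not the study data): bothered/not bothered, girls vs. boys.
print(odds_ratio_ci(a=40, b=120, c=15, d=130))
```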
Masking Release for Speech-in-Speech Recognition Due to a Target/Masker Sex Mismatch in Children With Hearing Loss
Objectives: The goal of the present study was to compare the extent to which children with hearing loss and children with normal hearing benefit from mismatches in target/masker sex in the context of speech-in-speech recognition. It was hypothesized that children with hearing loss experience a smaller target/masker sex mismatch benefit relative to children with normal hearing due to impairments in peripheral encoding, variable access to high-quality auditory input, or both. Design: Eighteen school-age children with sensorineural hearing loss (7 to 15 years) and 18 age-matched children with normal hearing participated in this study. Children with hearing loss were bilateral hearing aid users. Severity of hearing loss ranged from mild to severe across participants, but most had mild to moderate hearing loss. Speech recognition thresholds for disyllabic words presented in a two-talker speech masker were estimated in the sound field using an adaptive, forced-choice procedure with a picture-pointing response. Participants were tested in each of four conditions: (1) male target speech/two-male-talker masker; (2) male target speech/two-female-talker masker; (3) female target speech/two-female-talker masker; and (4) female target speech/two-male-talker masker. Children with hearing loss were tested wearing their personal hearing aids at user settings. Results: Both groups of children showed a sex-mismatch benefit, requiring a more advantageous signal-to-noise ratio when the target and masker were matched in sex than when they were mismatched. However, the magnitude of sex-mismatch benefit was significantly reduced for children with hearing loss relative to age-matched children with normal hearing. There was no effect of child age on the magnitude of sex-mismatch benefit. The sex-mismatch benefit was larger for male target speech than for female target speech. For children with hearing loss, the magnitude of sex-mismatch benefit was not associated with degree of hearing loss or aided audibility. Conclusions: The findings from the present study indicate that children with sensorineural hearing loss are able to capitalize on acoustic differences between speech produced by male and female talkers when asked to recognize target words in a competing speech masker. However, children with hearing loss experienced a smaller benefit relative to their peers with normal hearing. No association between the sex-mismatch benefit and measures of unaided thresholds or aided audibility was observed for children with hearing loss, suggesting that reduced peripheral encoding is not the only factor responsible for the smaller sex-mismatch benefit relative to children with normal hearing. ACKNOWLEDGMENTS: This research was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under award number R01 DC011038 (L. J. L.) and subject recruitment was supported by the National Institute of General Medical Sciences of the National Institutes of Health under award number P20GM109023. The authors have no conflicts of interest to disclose. Received June 28, 2018; accepted May 1, 2019. Address for correspondence: Lori J. Leibold, Center for Hearing Research, Boys Town National Research Hospital, 555 North 30th Street, Omaha, NE 68131, USA. E-mail: lori.leibold@boystown.org Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Barriers and Facilitators to Cochlear Implant Uptake in Australia and the United Kingdom
Objectives: Hearing loss (HL) affects a significant proportion of adults aged >50 years by impairing communication and social connectedness and, due to its high prevalence, is a growing global concern. Cochlear implants (CIs) are effective devices for many people with severe or greater sensorineural HL who experience limited benefits from hearing aids. Despite this, uptake rates globally are low among adults. This multimethod, multicountry qualitative study aimed to investigate the barriers and facilitators to CI uptake among adults aged ≥50 years. Design: Adult CI and hearing aid users with postlingual severe or greater sensorineural HL, general practitioners, and audiologists were recruited in Australia using purposive sampling, and a comparative sample of audiologists was recruited in England and Wales in the United Kingdom. Participants were interviewed individually or in a focus group, and completed a demographic questionnaire and a qualitative survey. Data were analyzed using thematic analysis. Results: A total of 143 data capture events were collected from 55 participants. The main barriers to CI uptake related to patients’ concerns about surgery and loss of residual hearing. Limited knowledge of CIs, eligibility criteria, and referral processes acted as barriers to CI assessment referrals by healthcare professionals. Facilitators for CI uptake included patients’ desire for improved communication and social engagement, and increased healthcare professional knowledge and awareness of CIs. Conclusions: There are numerous complex barriers and facilitators to CI uptake. Knowledge of these can inform the development of targeted strategies to increase CI referral and surgery for potential beneficiaries. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank all of the participants who took part in this study, and the Macquarie University-Cochlear Ltd Partnership (MQ-Cochlear), for funding this research. M.R.B. designed the study, performed the data collection, analyzed and interpreted the data and wrote the paper; C.M.M., I.B., and F.R. designed the study, provided guidance for data collection, interpretive analysis, and commented on the manuscript at all stages. S.H. designed the study and performed the data collection in the United Kingdom, provided interpretive analysis and commented on the manuscript at all stages; A.Y.S.L. and J.B. designed the study and commented on the manuscript at all stages. All authors discussed the results and implications and commented on the manuscript at all stages. All authors provided final approval of the manuscript submitted. The authors would like to disclose that another manuscript has been prepared reporting results from this study, not reported in this manuscript, which will be submitted to another journal. The authors received financial support for the study from the Macquarie University-Cochlear Ltd Partnership (MQ-Cochlear). Authors had full control over the study design, conduct, and analysis. The authors have no conflicts of interest to disclose. Received September 13, 2018; accepted May 27, 2019. Address for correspondence: Mia Bierbaum, Australian Institute of Health Innovation, Macquarie University, Level 6, 75 Talavera Road, Macquarie University, NSW, 2109.
E-mail: Mia.bierbaum@mq.edu.au This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Effects of Simulated and Profound Unilateral Sensorineural Hearing Loss on Recognition of Speech in Competing Speech
Objectives: Unilateral hearing loss (UHL) is a condition as common as bilateral hearing loss in adults. Because of the unilaterally reduced audibility associated with UHL, binaural processing of sounds may be disrupted. As a consequence, daily tasks such as listening to speech in a background of spatially distinct competing sounds may be challenging. A growing body of subjective and objective data suggests that spatial hearing is negatively affected by UHL. However, the type and degree of UHL vary considerably in previous studies. The aim here was to determine the effect of a profound sensorineural UHL, and of a simulated UHL, on recognition of speech in competing speech, and the binaural and monaural contributions to spatial release from masking, in a demanding multisource listening environment. Design: Nine subjects (25 to 61 years) with profound sensorineural UHL [mean pure-tone average (PTA) across 0.5, 1, 2, and 4 kHz = 105 dB HL] and normal contralateral hearing (mean PTA = 7.2 dB HL) were included based on the criterion that the target and competing speech were inaudible in the ear with hearing loss. Thirteen subjects with normal hearing (19 to 60 years; mean left PTA = 4.1 dB HL; mean right PTA = 5.5 dB HL) contributed data in normal and simulated “mild-to-moderate” UHL conditions (PTA = 38.6 dB HL). The main outcome measure was the threshold for 40% correct speech recognition in colocated (0°) and spatially and symmetrically separated (±30° and ±150°) competing speech conditions. Spatial release from masking was quantified as the threshold difference between colocated and separated conditions. Results: Thresholds in profound UHL were higher (worse) than in normal hearing for both separated and colocated conditions, and comparable to those in simulated UHL. Monaural spatial release from masking, that is, the spatial release achieved by subjects with profound UHL, was significantly different from zero and was 49% of the magnitude of the spatial release from masking achieved by subjects with normal hearing. There were subjects with profound UHL who showed negative spatial release, whereas subjects with normal hearing consistently showed positive spatial release from masking in the normal condition. The simulated UHL had a larger effect on the speech recognition threshold for separated than for colocated conditions, resulting in decreased spatial release from masking. The difference in spatial release between normal-hearing and simulated UHL conditions increased with age. Conclusions: The results demonstrate that while recognition of speech in colocated and separated competing speech is impaired for profound sensorineural UHL, spatial release from masking may be possible when competing speech is symmetrically distributed around the listener. A “mild-to-moderate” simulated UHL decreases spatial release from masking compared with normal-hearing conditions and interacts with age, indicating that small amounts of residual hearing in the UHL ear may be more beneficial for separated than for colocated interferer conditions for young listeners. ACKNOWLEDGMENTS: The authors are grateful to Maria Drott, Malin Apler, Jenny Andersson, Linda Persson, and Ann-Charlotte Persson for assistance in measurements; Per-Olof Larsson for technical assistance; and the subjects for participating. This work was supported by the Hasselblad Foundation.
Part of this work was previously presented orally at the 6th International Congress on Bone Conduction Hearing and Related Technologies, Nijmegen, The Netherlands, May 17–20, 2017. The authors have no conflicts of interest to declare. Received March 27, 2018; accepted May 28, 2019. Address for correspondence: Filip Asp, Department of ENT, Section of Hearing Implants, Karolinska University Hospital Huddinge, 141 86 Stockholm, Sweden. E-mail: filip.asp@ki.se This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
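Spatial release from masking (SRM) is defined in the abstract above as the threshold difference between the colocated and separated competing-speech conditions. A minimal sketch of that computation, using hypothetical thresholds rather than the study's measured values:

```python
def spatial_release(colocated_srt_db, separated_srt_db):
    """SRM in dB: positive values mean spatial separation lowered (improved) the threshold."""
    return colocated_srt_db - separated_srt_db

# Hypothetical example thresholds, not measured results.
srm_uhl = spatial_release(colocated_srt_db=-2.0, separated_srt_db=-4.0)   # 2.0 dB
srm_nh = spatial_release(colocated_srt_db=-6.0, separated_srt_db=-10.0)   # 4.0 dB
print(f"monaural SRM is {100 * srm_uhl / srm_nh:.0f}% of the normal-hearing SRM")
```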
AVATAR Assesses Speech Understanding and Multitask Costs in Ecologically Relevant Listening Situations
Objectives: There is a pressing need among clinicians and researchers for an ecologically valid measure of auditory functioning and listening effort. Therefore, we developed AVATAR: an “Audiovisual True-to-life Assessment of Auditory Rehabilitation,” which takes important characteristics of real-life listening situations into account, such as multimodal speech presentation, spatial separation of sound sources, and multitasking. As such, AVATAR aims to assess both auditory functioning and the amount of allocated processing resources during listening in a realistic yet controllable way. In the present study, we evaluated AVATAR and investigated whether speech understanding in noise and multitask costs during realistic listening environments changed with increasing task complexity. Design: Thirty-five young normal-hearing participants performed different task combinations of an auditory-visual speech-in-noise task and three secondary tasks involving auditory localization and visual short-term memory in a simulated restaurant environment. Tasks were combined in increasing complexity, and multitask costs on the secondary tasks were investigated as an estimate of the amount of cognitive resources allocated during listening and multitasking. In addition to behavioral measures of auditory functioning and effort, working memory capacity and self-reported hearing difficulties were established using a reading span test and a questionnaire on daily hearing abilities. Results: Whereas performance on the speech-in-noise task was not affected by task complexity, multitask costs on one of the secondary tasks became significantly larger with increasing task complexity. Working memory capacity correlated significantly with multitask costs, but no association was observed between behavioral outcome measures and self-reported hearing abilities or effort. Conclusions: AVATAR proved to be a promising model to assess speech intelligibility and auditory localization abilities and to gauge the amount of processing resources during effortful listening in ecologically relevant multitasking situations by means of multitask costs. In contrast with current clinical measures of auditory functioning, results showed that listening and multitasking in challenging listening environments can require a considerable amount of processing resources, even for young normal-hearing adults. Furthermore, the allocation of resources increased in more demanding listening situations. These findings open avenues for a more realistic assessment of auditory functioning and individually tuned auditory rehabilitation for individuals of different ages and hearing profiles. ACKNOWLEDGMENTS: The authors thank Professor Ralf Krampe for sharing interesting insights and Alexander Dudek for his technical support. This research project was supported with grants from the Oticon Foundation (Oticon Fonden), the Research Council of KU Leuven through project 0T/12/98, and a TBM-FWO Grant from the Research Foundation-Flanders (T002216N). Received August 4, 2018; accepted June 10, 2019. Address for correspondence: Annelies Devesse, KU Leuven Department of Neurosciences, ExpORL Herestraat 49 Bus 721, B-3000 Leuven, Belgium. E-mail: annelies.devesse@kuleuven.be Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
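The multitask costs described above serve as an estimate of the processing resources allocated during listening. The abstract does not give the exact formula, so the proportional dual-task cost shown below is only one common convention, computed on hypothetical scores:

```python
def dual_task_cost(single_task_score, dual_task_score):
    """Proportional performance cost (%) on a secondary task when it is
    performed together with the primary listening task."""
    return 100 * (single_task_score - dual_task_score) / single_task_score

# Hypothetical accuracies for a visual short-term memory task.
print(dual_task_cost(single_task_score=0.92, dual_task_score=0.78))  # ~15.2% cost
```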
Limitations of the Envelope Difference Index as a Metric for Nonlinear Distortion in Hearing Aids
Objectives: The envelope difference index (EDI) compares the envelopes of two signals. It has been used to measure nonlinear distortion in hearing aids, but it also responds to linear processing. This article compares linear and nonlinear processing effects on the EDI. Design: The EDI for spectral tilt and peak clipping distortion is computed to illustrate the effects of linear and nonlinear signal modifications. The EDI for wide dynamic-range compression is then compared with that obtained for linear amplification for a set of standard audiograms to show the expected range of EDI values for linear and nonlinear hearing aid processing. The EDI for hearing aid amplification and compression is also compared with a measure of time-frequency envelope modulation distortion for the same conditions. Results: The EDI is shown to be as sensitive to linear amplification as it is to nonlinear processing. The EDI values for spectral tilt can exceed those for peak clipping, and the EDI values for linear amplification exceed those for wide dynamic-range compression for four of the nine audiograms considered. The agreement of the EDI with a nonlinear envelope distortion measure is shown to depend on the long-term spectra of the signals being compared when computing the EDI. Conclusions: The accuracy of the EDI as an indicator of nonlinear distortion for sentence materials can be improved by equalizing the long-term spectrum of the processed signal to match that of the unprocessed input. However, the EDI does not have a clear interpretation because of the confound between linear and nonlinear processing effects and the lack of an auditory model in calculating the signal differences. ACKNOWLEDGMENTS: The research reported in this article was supported by a grant from GN ReSound to the University of Colorado and by a grant from the National Institutes of Health (R01 DC012289). The author has no conflicts of interest to disclose. Received December 4, 2018; accepted May 27, 2019. Address for correspondence: James M. Kates, University of Colorado, 409 UCB, Boulder, CO 80309, USA. E-mail: james.kates@colorado.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
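For readers unfamiliar with the metric, the sketch below illustrates one common formulation of the EDI: extract the envelope of each signal, normalize each envelope to unit mean, and take half the mean absolute difference, giving a value between 0 (identical envelopes) and 1. The envelope extraction details here (Hilbert envelope with simple block smoothing) are assumptions for illustration, not necessarily the procedure used in the article:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_difference_index(x, y, fs, cutoff_hz=50):
    """EDI between two equal-length signals: 0 = identical envelopes, 1 = maximally different."""
    win = max(1, int(fs / cutoff_hz))  # crude smoothing window (~1/cutoff_hz seconds)
    def env(sig):
        e = np.abs(hilbert(sig))                      # Hilbert envelope
        return np.convolve(e, np.ones(win) / win, mode="same")
    ex, ey = env(x), env(y)
    ex, ey = ex / ex.mean(), ey / ey.mean()           # normalize each envelope to unit mean
    return 0.5 * np.mean(np.abs(ex - ey))

# Example: EDI of an amplitude-modulated tone versus its peak-clipped copy.
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 4 * t) * np.sin(2 * np.pi * 440 * t)
clipped = np.clip(clean, -0.3, 0.3)
print(envelope_difference_index(clean, clipped, fs))
```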
Hearing Impairment and Prevalence of Mild Cognitive Impairment in Japan: Baseline Data From the Aidai Cohort Study in Yawatahama and Uchiko
Objectives: Hearing impairment (HI) in midlife may increase the risk of dementia. However, epidemiological research on the association between HI and mild cognitive impairment (MCI) is very limited. Design: The present cross-sectional study investigated the relationship between HI and MCI using baseline data from the Aidai Cohort Study. Study subjects were 995 Japanese adults aged 36 to 84 years. We used the audiometric definition of HI adopted by the World Health Organization, which is based on the speech-frequency pure-tone average of hearing thresholds at 0.5, 1, 2, and 4 kHz. HI was defined as present when the pure-tone average was >25 dB HL in the better hearing ear. MCI was defined as present when a subject had a score of <26 on the Japanese version of the Montreal Cognitive Assessment. Adjustment was made for age, sex, smoking status, alcohol consumption, leisure time physical activity, hypertension, dyslipidemia, diabetes mellitus, history of depression, body mass index, waist circumference, employment, education, and household income. Results: Among the 995 study subjects, the prevalence values of HI and MCI were 24.3% and 44.5%, respectively. HI was independently positively associated with MCI: the multivariate-adjusted odds ratio (95% confidence interval) was 1.86 (1.32 to 2.62). HI was independently related to a higher prevalence of MCI in those aged 60 to 69 years and those aged 70 years or older: the multivariate-adjusted odds ratios (95% confidence intervals) were 1.64 (1.03 to 2.62) and 2.30 (1.04 to 5.27), respectively. Conclusions: HI may be associated with a higher prevalence of MCI. ACKNOWLEDGMENTS: The authors thank the Yawatahama City Government, the Uchiko Town Government, and the Ehime Prefecture Medical Association for their valuable support. This work was supported by the Research Unit of Ehime University. Y.M. and K.T. contributed to the study concept and design and the data acquisition. H.S., T.N., and B.M. contributed to the data acquisition. K.T., S.O., H.S., Y.F., and T.N. contributed to the cognitive assessment. H.S., M.O., D.T., M.T., and N.H. contributed to the audiometric hearing assessment. Y.M. was responsible for the analysis and interpretation of data and the drafting of the article. All authors read and approved the final article. The authors have no conflicts of interest to disclose. Received October 24, 2018; accepted May 31, 2019. Address for correspondence: Yoshihiro Miyake, Department of Epidemiology and Preventive Medicine, Ehime University Graduate School of Medicine, Ehime 791-0295, Japan. E-mail: miyake.yoshihiro.ls@ehime-u.ac.jp Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
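The exposure definition above (better-ear speech-frequency pure-tone average >25 dB HL) is easy to express as a rule. A minimal sketch, using an illustrative audiogram rather than study data:

```python
SPEECH_FREQS_KHZ = (0.5, 1, 2, 4)

def pure_tone_average(thresholds_db_hl):
    """Mean threshold across 0.5, 1, 2, and 4 kHz for one ear."""
    return sum(thresholds_db_hl[f] for f in SPEECH_FREQS_KHZ) / len(SPEECH_FREQS_KHZ)

def has_hearing_impairment(left_ear, right_ear, cutoff_db_hl=25):
    """Hearing impairment present if the better-ear PTA exceeds the cutoff."""
    better_ear_pta = min(pure_tone_average(left_ear), pure_tone_average(right_ear))
    return better_ear_pta > cutoff_db_hl

# Hypothetical audiogram (dB HL at each frequency in kHz).
left = {0.5: 20, 1: 25, 2: 35, 4: 50}
right = {0.5: 15, 1: 20, 2: 30, 4: 45}
print(has_hearing_impairment(left, right))  # better-ear PTA = 27.5 dB HL -> True
```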
Relationship Between Diet, Tinnitus, and Hearing Difficulties
Objectives: Diet may affect susceptibility of the inner ear to noise and age-related effects that lead to tinnitus and hearing loss. This study used complementary single nutrient and dietary pattern analysis based on statistical grouping of usual dietary intake in a cross-sectional analysis of tinnitus and hearing difficulties in a large population study sample. Design: The research was conducted using the UK Biobank resource. Tinnitus was defined based on report of ringing or buzzing in one or both ears that lasts more than five minutes at a time and is currently experienced at least some of the time. Identification of a hearing problem was based on self-reported difficulties with hearing. Usual dietary intake and dietary patterns (involving statistical grouping of intake to account for how foods are combined in real-life diets) were estimated based on between two and five administrations of the Oxford Web-Q 24-hour dietary recall questionnaire over the course of a year for 34,576 UK adult participants aged 40 to 69. Results: In a multivariate model, higher intake of vitamin B12 was associated with reduced odds of tinnitus, while higher intakes of calcium, iron, and fat were associated with increased odds (B12, odds ratio [OR] 0.85, 95% confidence interval [CI] 0.75 to 0.97; Calcium, OR 1.20, 95% CI 1.08 to 1.34; Iron, OR 1.20, 95% CI 1.05 to 1.37; Fat, OR 1.33, 95% CI 1.09 to 1.62, respectively, for quintile 5 versus quintile 1). A dietary pattern characterised by high protein intake was associated with reduced odds of tinnitus (OR 0.90, 95% CI 0.82 to 0.99 for quintile 5 versus quintile 1). Higher vitamin D intake was associated with reduced odds of hearing difficulties (OR 0.90, 95% CI 0.81 to 1.00 for quintile 5 versus quintile 1), as were dietary patterns high in fruit and vegetables (Prudent diet: OR 0.89, 95% CI 0.83 to 0.96) and high in meat (High protein: OR 0.88, 95% CI 0.82 to 0.95), whereas a dietary pattern high in fat was associated with increased odds (High fat: OR 1.16, 95% CI 1.08 to 1.24); all ORs are for quintile 5 versus quintile 1. Conclusions: There were associations between both single nutrients and dietary patterns with tinnitus and hearing difficulties. Although the size of the associations was small, universal exposure to dietary factors indicates that there may be a substantial impact of diet on levels of tinnitus and hearing difficulties in the population. This study showed that dietary factors might be important for hearing health. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Avni Vayas for assisting with interpretation of the factors in the dietary pattern analysis. This research was conducted with the UK Biobank resource and supported by the NIHR Manchester Biomedical Research Centre. Received August 1, 2017; accepted May 21, 2019. The authors have no conflicts of interest to disclose. Address for correspondence: Piers Dawes, Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK. E-mail: piers.dawes@manchester.ac.uk This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
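The dietary associations above are reported as odds ratios for the highest versus lowest intake quintile. A minimal sketch of that kind of quintile contrast on synthetic data; the variable names, data, and unadjusted model are illustrative only and are not the UK Biobank analysis, which adjusted for multiple covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "b12_intake": rng.gamma(shape=4, scale=1.5, size=n),  # synthetic nutrient intake
    "tinnitus": rng.integers(0, 2, size=n),               # synthetic binary outcome
})
# Quintiles of usual intake; quintile 1 is the reference category.
df["quintile"] = pd.qcut(df["b12_intake"], 5, labels=[1, 2, 3, 4, 5])

# Simple contrast: indicator for quintile 5 versus quintile 1.
subset = df[df["quintile"].isin([1, 5])].copy()
subset["q5"] = (subset["quintile"] == 5).astype(int)

model = sm.Logit(subset["tinnitus"], sm.add_constant(subset["q5"])).fit(disp=0)
or_q5 = np.exp(model.params["q5"])
ci_lo, ci_hi = np.exp(model.conf_int().loc["q5"])
print(f"OR (Q5 vs Q1) = {or_q5:.2f}, 95% CI {ci_lo:.2f} to {ci_hi:.2f}")
```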
Psychobiological Responses Reveal Audiovisual Noise Differentially Challenges Speech Recognition
Objectives: In noisy environments, listeners benefit from both hearing and seeing a talker, demonstrating audiovisual (AV) cues enhance speech-in-noise (SIN) recognition. Here, we examined the relative contribution of auditory and visual cues to SIN perception and the strategies used by listeners to decipher speech in noise interference(s). Design: Normal-hearing listeners (n = 22) performed an open-set speech recognition task while viewing audiovisual TIMIT sentences presented under different combinations of signal degradation including visual (AVn), audio (AnV), or multimodal (AnVn) noise. Acoustic and visual noises were matched in physical signal-to-noise ratio. Eyetracking monitored participants’ gaze to different parts of a talker’s face during SIN perception. Results: As expected, behavioral performance for clean sentence recognition was better for A-only and AV compared to V-only speech. Similarly, with noise in the auditory channel (AnV and AnVn speech), performance was aided by the addition of visual cues of the talker regardless of whether the visual channel contained noise, confirming a multimodal benefit to SIN recognition. The addition of visual noise (AVn) obscuring the talker’s face had little effect on speech recognition by itself. Listeners’ eye gaze fixations were biased toward the eyes (decreased at the mouth) whenever the auditory channel was compromised. Fixating on the eyes was negatively associated with SIN recognition performance. Eye gazes on the mouth versus eyes of the face also depended on the gender of the talker. Conclusions: Collectively, results suggest listeners (1) depend heavily on the auditory over visual channel when seeing and hearing speech and (2) alter their visual strategy from viewing the mouth to viewing the eyes of a talker with signal degradations, which negatively affects speech perception. Acknowledgments: This work was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under award number NIH/NIDCD R01DC016267 (G. M. B.). The authors have no conflicts of interest to disclose. Received April 3, 2018; accepted May 3, 2019. Address for correspondence: Gavin M. Bidelman, PhD, School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA. E-mail: gmbdlman@memphis.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
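The acoustic and visual noises above were matched in physical signal-to-noise ratio. For the acoustic side, mixing speech and noise at a target SNR usually amounts to scaling the noise by an RMS-based gain; the sketch below shows that general approach with made-up signals, not the study's actual stimulus pipeline:

```python
import numpy as np

def mix_at_snr(speech, noise, target_snr_db):
    """Scale `noise` so the speech-to-noise RMS ratio equals `target_snr_db`, then mix."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(speech) / (rms(noise) * 10 ** (target_snr_db / 20))
    return speech + gain * noise[: len(speech)]

# Hypothetical signals: 1 s of "speech" and noise at 16 kHz, mixed at 0 dB SNR.
fs = 16000
speech = np.random.randn(fs) * 0.1
noise = np.random.randn(fs)
mixed = mix_at_snr(speech, noise, target_snr_db=0.0)
```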
Psychometric Properties of Cognitive-Motor Dual-Task Studies With the Aim of Developing a Test Protocol for Persons With Vestibular Disorders: A Systematic Review
Objectives: Patients suffering from vestibular disorders (VD) often present with impairments in cognitive domains such as visuospatial ability, memory, executive function, attention, and processing speed. These symptoms can be attributed to extensive vestibular projections throughout the cerebral cortex and subcortex on the one hand, and to increased cognitive-motor interference (CMI) on the other hand. CMI can be assessed by performing cognitive-motor dual-tasks (DTs). The existing literature on this topic is scarce and varies greatly when it comes to test protocol, type and degree of vestibular impairment, and outcome. To develop a reliable and sensitive test protocol for VD patients, an overview of the existing reliability and validity studies on DT paradigms will be given for a variety of populations, such as persons with dementia, multiple sclerosis, Parkinson’s disease, or stroke, and older adults. Design: The systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. An extensive literature search on psychometric properties of cognitive-motor DTs was run on MEDLINE, Embase, and Cochrane Databases. The studies were assessed for eligibility by two independent researchers, and their methodological quality was subsequently evaluated using the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN). Results and Conclusions: Thirty-three studies were included in the current review. Based on the reliability and validity calculations, including both a static and a dynamic motor task seems valuable in a DT protocol for VD patients. To evoke CMI maximally in this population, both motor tasks should be performed while challenging the vestibular cognitive domains. Out of the large number of cognitive tasks employed in DT studies, a clear selection for each of these domains, except for visuospatial abilities, could be made based on this review. The use of the suggested DTs will give a more accurate, daily-life representation of cognitive and motor deficiencies and their interaction in the VD population. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: All authors substantially contributed to the article. Their role can be summarized as follows: M. D.: literature search, screening of abstracts, quality assessment, drafting of the initial article, and improving revised versions. L. M.: supervision, third reviewer on screening of abstracts, assisting in the interpretation of literature findings, critically reviewing, and revising the article. R. V. H.: screening of abstracts, quality assessment, and critically reviewing, and revising the article. H. K., S. D., D. C., R. v. d. B., and V. V. R.: critically reviewing and revising the article. All authors approved the final article as submitted and are accountable for all aspects of the study. The authors have no conflicts of interest to disclose. Received November 28, 2018; accepted April 12, 2019. Address for correspondence: Maya Danneels, Faculty of Medicine and Health Sciences, Department of Rehabilitation Sciences, Ghent University, (2P1), Corneel Heymanslaan 10, Ghent, Belgium. E-mail: maya.danneels@ugent.be Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
