Thursday, 10 October 2019

Neural network methodology for real-time modelling of bio-heat transfer during thermo-therapeutic applications
Publication date: November 2019
Source: Artificial Intelligence in Medicine, Volume 101
Author(s): Jinao Zhang, Sunita Chauhan
Abstract
Real-time simulation of bio-heat transfer can improve surgical feedback in thermo-therapeutic treatment, leading to technical innovations in the surgical process and improvements in patient outcomes; however, it is challenging to achieve real-time computational performance with conventional methods. This paper presents a cellular neural network (CNN) methodology for fast and real-time modelling of bio-heat transfer with medical applications in thermo-therapeutic treatment. It formulates the nonlinear dynamics of the bio-heat transfer process and the spatially discretised bio-heat transfer equation as the nonlinear neural dynamics and local neural connectivity of the CNN, respectively. The proposed CNN methodology considers three-dimensional (3-D) volumetric bio-heat transfer behaviour in tissue and applies the concept of control volumes for discretisation of the Pennes bio-heat transfer equation on 3-D irregular grids, leading to novel neural network models embedded with the bio-heat transfer mechanism for computation of tissue temperature and associated thermal dose. Simulations and comparative analyses demonstrate that the proposed CNN models achieve good agreement in numerical accuracy with the commercial finite element analysis package ABAQUS/CAE, while reducing computation time by factors of 304 and 772.86 compared to ABAQUS with and without parallel execution, respectively, far exceeding the computational performance of the commercial finite element codes. The medical application is demonstrated using high-intensity focused ultrasound (HIFU)-based thermal ablation of hepatic cancer for prediction of tissue temperature and estimation of thermal dose.
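The two quantities the models compute, tissue temperature from the Pennes equation and the associated thermal dose, can be illustrated with a minimal sketch. This is not the paper's 3-D control-volume CNN formulation: it is a plain 1-D explicit finite-difference solver of the Pennes bio-heat equation with a Gaussian heat source, plus the standard Sapareto-Dewey CEM43 dose, and all parameter values are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def pennes_1d(n=101, L=0.05, dt=0.05, steps=2000):
    rho, c, k = 1050.0, 3600.0, 0.5            # tissue density, heat capacity, conductivity
    wb, rho_b, cb, Ta = 5e-4, 1000.0, 4200.0, 37.0  # perfusion rate, blood terms, arterial temp
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    T = np.full(n, 37.0)                        # initial body temperature (deg C)
    Q = 5e5 * np.exp(-((x - L / 2) ** 2) / (2 * 0.003 ** 2))  # focal heat source (W/m^3)
    cem43 = np.zeros(n)                         # cumulative equivalent minutes at 43 C
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        # Pennes: rho*c*dT/dt = k*laplacian + perfusion sink + metabolic/external source
        T = T + dt / (rho * c) * (k * lap + wb * rho_b * cb * (Ta - T) + Q)
        T[0] = T[-1] = 37.0                     # fixed boundary temperature
        R = np.where(T > 43.0, 0.5, 0.25)
        cem43 += R ** (43.0 - T) * dt / 60.0    # Sapareto-Dewey dose in minutes
    return x, T, cem43

x, T, dose = pennes_1d()
print(T.max(), dose.max())
```

The explicit time step respects the diffusion stability limit for this grid; the paper's contribution is precisely to replace such step-by-step solvers with neural dynamics that reach the same solution far faster.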

Study on miR-384-5p activates TGF-β signaling pathway to promote neuronal damage in abutment nucleus of rats based on deep learning
Publication date: Available online 10 October 2019
Source: Artificial Intelligence in Medicine
Author(s): Zhen Wang, Xiaoyan Du, Yang Yang, Guoqing Zhang
Abstract
Background
Any ailment in our organs can be visualized using different modality signals and images. Hospitals are encountering a massive influx of large multimodality patient data that must be analysed accurately and with contextual understanding. Deep learning techniques, such as convolutional neural networks (CNN), long short-term memory (LSTM) networks, autoencoders, deep generative models and deep belief networks, have already been applied to analyse such large collections of data efficiently. Applying these methods to medical signals and images can aid clinicians in clinical decision making.
Purpose
The aim of this study was to explore the potential mechanism of action of anesthesia on basolateral amygdala neurons in rats based on deep learning.
Patients and methods
First, we obtained rat anesthesia data from the GEO database and performed differential analysis, co-expression analysis, and enrichment analysis to obtain the relevant module genes. In addition, the potential regulation of the modules by multiple factors was calculated by hypergeometric test, and a series of ncRNAs and TFs were identified. Finally, we screened the target genes of anesthetized rats to gain insight into the potential role of anesthesia in rat basolateral amygdala neurons.
Results
A total of 535 differentially expressed genes were obtained in rats, including Mafb and Ryr2. These genes clustered into 17 anesthesia-related dysregulation modules. The biological processes favored by the modules are regulation of the neuronal apoptotic process and transforming growth factor beta2 production. Pivot analysis found that 39 ncRNAs and 4 TFs drive the anesthesia-related disorders. Finally, the mechanism of action was analyzed and predicted: the modules are regulated by Acvr1. We believe that miR-384-5p in anesthetized rats can activate the TGF-β signaling pathway and thereby promote anesthesia-related damage to basolateral amygdala neurons.
Conclusion
In this study, the dysregulation modules were used to explore the multi-factor-mediated mechanism of anesthesia, providing new methods and ideas for subsequent research. The results suggest that miR-384-5p can promote anesthesia-related damage to basolateral amygdala neurons in rats through a variety of biological processes and signaling pathways. This result lays a solid theoretical foundation for biologists to further explore the mechanisms underlying anesthesia.

Cosine Similarity Measures of Bipolar Neutrosophic Set for Diagnosis of Bipolar Disorder Diseases
Publication date: Available online 5 October 2019
Source: Artificial Intelligence in Medicine
Author(s): Mohamed Abdel-Basset, Mai Mohamed, Mohamed Elhoseny, Le Hoang Son, Francisco Chiclana, Abd El-Nasser H. Zaied
Abstract
Similarity plays a significant implicit or explicit role in various fields. In some real decision-making applications, similarity may bring counterintuitive outcomes from the decision maker's standpoint. Therefore, in this research, we propose some novel similarity measures for bipolar and interval-valued bipolar neutrosophic sets, namely cosine similarity measures and weighted cosine similarity measures. Propositions concerning these similarity measures are examined, and two multi-attribute decision-making techniques based on the proposed measures are presented. To verify the feasibility of the proposed measures, two numerical examples are presented and compared with related methods to demonstrate the practicality of the proposed approach. Finally, we apply the proposed similarity measures to the diagnosis of bipolar disorder diseases.
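The core construction can be sketched compactly. A bipolar neutrosophic element carries six membership degrees (positive and negative truth, indeterminacy, and falsity), and a cosine similarity averages the vector-cosine of corresponding elements. This is a minimal sketch following the standard cosine construction; the paper's exact weighted formulas and the interval-valued variant may differ, and the two one-element sets below are invented for illustration.

```python
import math

def cosine_similarity(A, B):
    """A, B: lists of 6-tuples (T+, I+, F+, T-, I-, F-), one per element
    of the universe; positive degrees lie in [0, 1], negative in [-1, 0]."""
    total = 0.0
    for a, b in zip(A, B):
        dot = sum(x * y for x, y in zip(a, b))           # elementwise dot product
        na = math.sqrt(sum(x * x for x in a))            # norms of the two vectors
        nb = math.sqrt(sum(x * x for x in b))
        total += dot / (na * nb) if na and nb else 0.0
    return total / len(A)                                # average cosine over elements

A = [(0.7, 0.2, 0.1, -0.3, -0.5, -0.6)]
B = [(0.6, 0.3, 0.2, -0.2, -0.4, -0.7)]
print(round(cosine_similarity(A, B), 4))
```

A weighted variant would simply multiply each element's cosine by a weight summing to one instead of averaging uniformly; identical sets score exactly 1.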

Automatic Detection of Epileptic Seizure Based on Approximate Entropy, Recurrence Quantification Analysis and Convolutional Neural Networks
Publication date: Available online 7 September 2019
Source: Artificial Intelligence in Medicine
Author(s): Xiaozeng Gao, Xiaoyan Yan, Ping Gao, Xiujiang Gao, Shubo Zhang
Abstract
Epilepsy is the most common neurological disorder in humans. The electroencephalogram is a prevalent clinical tool for diagnosing epileptic seizure activity and provides valuable information for understanding the physiological mechanisms behind epileptic disorders. Approximate entropy and recurrence quantification analysis are nonlinear analysis tools that quantify the complexity and recurrence behaviours of non-stationary signals, respectively, and convolutional neural networks are a powerful class of models. In this paper, a new method for automatic detection of epileptic seizures in electroencephalogram recordings, based on approximate entropy and recurrence quantification analysis combined with a convolutional neural network, is proposed. The Bonn dataset was used to assess the proposed approach. The results indicate that the performance of epileptic seizure detection by approximate entropy and recurrence quantification analysis is good (all sensitivities, specificities and accuracies are greater than 80%); in particular, the sensitivity, specificity and accuracy based on the recurrence rate reached 92.17%, 91.75% and 92.00%. When the approximate entropy and recurrence quantification analysis features are combined with convolutional neural networks to automatically differentiate seizure electroencephalograms from normal recordings, the classification results reach 98.84%, 99.35% and 99.26%, respectively. This makes automatic detection of epileptic recordings possible, and it would be a valuable tool for the clinical diagnosis and treatment of epilepsy.

The virtual doctor: An interactive clinical-decision-support system based on deep learning for non-invasive prediction of diabetes
Publication date: September 2019
Source: Artificial Intelligence in Medicine, Volume 100
Author(s): Sebastian Spänig, Agnes Emberger-Klein, Jan-Peter Sowa, Ali Canbay, Klaus Menrad, Dominik Heider
Abstract
Artificial intelligence (AI) will pave the way to a new era in medicine. However, currently available AI systems do not interact with a patient, e.g., for anamnesis, and thus are used by physicians only for predictions in diagnosis or prognosis. Nevertheless, such systems are widely used, e.g., in diabetes or cancer prediction.
In the current study, we developed an AI that is able to interact with a patient (virtual doctor) by using a speech recognition and speech synthesis system and thus can autonomously interact with the patient, which is particularly important for, e.g., rural areas, where the availability of primary medical care is strongly limited by low population densities. As a proof-of-concept, the system is able to predict type 2 diabetes mellitus (T2DM) based on non-invasive sensors and deep neural networks. Moreover, the system provides an easy-to-interpret probability estimation for T2DM for a given patient. Besides the development of the AI, we further analyzed the acceptance of young people for AI in healthcare to estimate the impact of such a system in the future.

The role of medical smartphone apps in clinical decision-support: A literature review
Publication date: September 2019
Source: Artificial Intelligence in Medicine, Volume 100
Author(s): Helena A. Watson, Rachel M. Tribe, Andrew H. Shennan
Abstract
Introduction
The now ubiquitous smartphone has huge potential to assist clinical decision-making across the globe. However, the rapid pace of digitalisation contrasts starkly with the slower rate of medical research and publication. This review explores the evidence base that exists to validate and evaluate the use of medical decision-support apps. The resultant findings will inform appropriate and pragmatic evaluation strategies for future clinical app developers and provide a scientific and cultural context for research priorities in this field.
Method
Medline, Embase and Cochrane databases were searched for clinical trials concerning decision support and smartphones from 2007 (the year the first smartphone, the iPhone, was introduced) until January 2019.
Results
Following exclusions, 48 trials and one Cochrane review were included for final analysis. Whilst diagnostic accuracy studies are plentiful, clinical trials are scarce. App research methodology was further interrogated according to setting and decision-support modality: e.g. camera-based, guideline-based, predictive models. Description of app development pathways and regulation were highly varied. Global health emerged as an early adopter of decision-support apps and this field is leading implementation and evaluation.
Conclusion
Clinical decision-support apps have considerable potential to enhance access to care and quality of care, but the medical community must rise to the challenge of modernising its approach if it is truly committed to capitalising on the opportunities of digitalisation.

Ten years of knowledge representation for health care (2009–2018): Topics, trends, and challenges
Publication date: September 2019
Source: Artificial Intelligence in Medicine, Volume 100
Author(s): David Riaño, Mor Peleg, Annette ten Teije
Abstract
Background
In the last ten years, the international workshop on knowledge representation for health care (KR4HC) has hosted outstanding contributions of the artificial intelligence in medicine community pertaining to the formalization and representation of medical knowledge for supporting clinical care. Contributions regarding modeling languages, technologies and methodologies to produce these models, their incorporation into medical decision support systems, and practical applications in concrete medical settings have been the main contributions and the basis to define the evolution of this field across Europe and worldwide.
Objectives
Carry out a review of the papers accepted in KR4HC in the 2009–2018 decade, analyze and characterize the topics and trends within this field, and identify challenges for the evolution of the area in the near future.
Methods
We reviewed the title, the abstract, and the keywords of the 112 papers that were accepted to the workshop, identified the medical and technological topics involved in these works, classified the papers from medical and technological perspectives, and obtained the timeline of these topics in order to determine growth and decline in interest. The authors' experience in the field and the evidence gathered in the review were the basis for proposing a list of challenges for knowledge representation in health care in the future.
Results
The most generic knowledge representation methods are ontologies (31%), semantic web related formalisms (26%), decision tables and rules (19%), logic (14%), and probabilistic models (10%). From a medical informatics perspective, knowledge is mainly represented as computer interpretable clinical guidelines (43%), medical domain ontologies (26%), and electronic health care records (22%). Within the knowledge lifecycle, contributions are found in knowledge generation (38%), knowledge specification (24%), exception detection and management (12%), knowledge enactment (8%), temporal knowledge and reasoning (7%), and knowledge sharing and maintenance (7%). The clinical emphasis of knowledge is mainly related to clinical treatments (27%), diagnosis (13%), clinical quality indicators (13%), and guideline integration for multimorbid patients (12%). According to the level of development of the works presented, we distinguished four maturity levels: formal (22%), implementation (52%), testing (13%), and deployment (2%) levels. Some papers described technologies for specific clinical issues or diseases, mainly cancer (22%) and diseases of the circulatory system (20%). Chronicity and comorbidity were present in 10% and 8% of the papers, respectively.
Conclusions
KR4HC is a stable community, still active after ten years. A persistent focus has been knowledge representation, with an emphasis on semantic-web ontologies and on clinical-guideline based decision-support. Among others, two topics receive growing attention: integration of computer-interpretable guideline knowledge for the management of multimorbidity patients, and patient empowerment and patient-centric care.

Automated plaque classification using computed tomography angiography and Gabor transformations
Publication date: September 2019
Source: Artificial Intelligence in Medicine, Volume 100
Author(s): U. Rajendra Acharya, Kristen M. Meiburger, Joel En Wei Koh, Jahmunah Vicnesh, Edward J. Ciaccio, Oh Shu Lih, Sock Keow Tan, Raja Rizal Azman Raja Aman, Filippo Molinari, Kwan Hoong Ng
Abstract
Cardiovascular diseases are the primary cause of death globally. These are often associated with atherosclerosis. This inflammation process triggers important variations in the coronary arteries (CA) and can lead to coronary artery disease (CAD). The presence of CA calcification (CAC) has recently been shown to be a strong predictor of CAD. In this clinical setting, computed tomography angiography (CTA) has begun to play a crucial role as a non-intrusive imaging method to characterize and study CA plaques. Herein, we describe an automated algorithm to classify plaque as either normal, calcified, or non-calcified using 2646 CTA images acquired from 73 patients. The automated technique is based on various features extracted from the Gabor transform of the acquired CTA images. Specifically, seven features are extracted from the Gabor coefficients: energy, and Kapur, Max, Rényi, Shannon, Vajda, and Yager entropies. The features were then ranked by F-value and input to numerous classification methods to achieve the best classification accuracy with the fewest features. Moreover, two well-known feature reduction techniques were employed, and the resulting features were likewise ranked by F-value and input to several classifiers. The best classification results were obtained using all computed features without feature reduction, with a probabilistic neural network. An accuracy, positive predictive value, sensitivity, and specificity of 89.09%, 91.70%, 91.83% and 83.70%, respectively, were obtained. Based on these results, it is evident that the technique can be helpful in the automated classification of plaques present in CTA images, and may become an important tool to reduce procedural costs and patient radiation dose. It could also aid clinicians in plaque diagnostics.
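The feature-extraction step can be sketched as follows: correlate the image with a 2-D Gabor kernel, then summarize the coefficient magnitudes. This minimal, NumPy-only sketch computes two of the seven features named above (energy and Shannon entropy) for a single orientation; the kernel parameters, the random test image, and the direct-correlation implementation are all illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real 2-D Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, theta=0.0):
    k = gabor_kernel(theta=theta)
    kh, kw = k.shape
    H, W = img.shape
    # direct 2-D correlation over the valid region (no SciPy dependency)
    resp = np.array([[np.sum(img[i:i + kh, j:j + kw] * k)
                      for j in range(W - kw + 1)] for i in range(H - kh + 1)])
    coeff = np.abs(resp).ravel()
    energy = float(np.sum(coeff**2))                         # energy of the coefficients
    p = coeff / coeff.sum()                                  # normalize to a distribution
    shannon = float(-np.sum(p * np.log2(p + 1e-12)))         # Shannon entropy in bits
    return energy, shannon

rng = np.random.default_rng(1)
img = rng.random((32, 32))                                   # stand-in for a CTA patch
e, h = gabor_features(img)
print(e, h)
```

In a full pipeline such features would be computed for a bank of orientations and scales; the other entropies (Kapur, Rényi, etc.) are different functionals of the same normalized coefficient distribution `p`.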

Leveraging implicit expert knowledge for non-circular machine learning in sepsis prediction
Publication date: September 2019
Source: Artificial Intelligence in Medicine, Volume 100
Author(s): Shigehiko Schamoni, Holger A. Lindner, Verena Schneider-Lindner, Manfred Thiel, Stefan Riezler
Abstract
Sepsis is the leading cause of death in non-coronary intensive care units. Moreover, delaying antibiotic treatment of patients with severe sepsis by only a few hours is associated with increased mortality. This insight makes accurate models for early prediction of sepsis a key task in machine learning for healthcare. Previous approaches have achieved high AUROC by learning from electronic health records where sepsis labels were defined automatically following established clinical criteria. We argue that the practice of incorporating the clinical criteria that are used to automatically define ground-truth sepsis labels as features of severity scoring models is inherently circular and compromises the validity of the proposed approaches. We propose to create an independent ground truth for sepsis research by exploiting the implicit knowledge of clinical practitioners via an electronic questionnaire that records attending physicians' daily judgements of patients' sepsis status. We show that despite its small size, our dataset allows us to achieve state-of-the-art AUROC scores. An inspection of the learned weights for standardized features of the linear model lets us infer potentially surprising feature contributions and allows us to interpret seemingly counterintuitive findings.
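Two pieces of the evaluation described above are easy to make concrete: computing AUROC from a linear severity score, and reading off standardized-feature weights. The sketch below uses the rank-pair (Mann-Whitney) identity for AUROC on entirely synthetic data; the feature matrix, weights, and label-generating process are invented for illustration and have nothing to do with the paper's questionnaire dataset.

```python
import numpy as np

def auroc(scores, labels):
    """AUROC as the fraction of (positive, negative) pairs ranked correctly."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels).astype(bool)
    pos, neg = scores[labels], scores[~labels]
    grid = pos[:, None] - neg[None, :]
    return float(((grid > 0).sum() + 0.5 * (grid == 0).sum()) / (len(pos) * len(neg)))

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 3))                 # standardized features (zero mean, unit std)
w = np.array([1.5, -0.8, 0.1])                    # linear-model weights (illustrative)
logits = X @ w
y = rng.random(200) < 1 / (1 + np.exp(-logits))   # synthetic labels from a logistic model
score = X @ w
print(auroc(score, y))
```

Because the features are standardized, the magnitudes of `w` are directly comparable, which is what makes the kind of weight inspection described in the abstract meaningful: here the first feature contributes most, the third almost nothing.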

On strategic choices faced by large pharmaceutical laboratories and their effect on innovation risk under fuzzy conditions
Publication date: September 2019
Source: Artificial Intelligence in Medicine, Volume 100
Author(s): Javier Puente, Fernando Gascon, Borja Ponte, David de la Fuente
Abstract
Objectives
We develop a fuzzy evaluation model that provides managers at different responsibility levels in pharmaceutical laboratories with a rich picture of their innovation risk as well as that of competitors. This would help them take better strategic decisions around the management of their present and future portfolio of clinical trials in an uncertain environment. Through three structured fuzzy inference systems (FISs), the model evaluates the overall innovation risk of the laboratories by capturing the financial and pipeline sides of the risk.
Methods and materials
Three FISs, based on the Mamdani model, determine the level of innovation risk of large pharmaceutical laboratories according to their strategic choices. Two subsystems measure different aspects of innovation risk while the third one builds on the results of the previous two. In all of them, both the partitions of the variables and the rules of the knowledge base are agreed through an innovative 2-tuple-based method. With the aid of experts, we have embedded knowledge into the FIS and later validated the model.
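A single Mamdani inference step (min implication, max aggregation, centroid defuzzification) can be sketched in a few lines. The two inputs below stand in for the first-layer subsystems, roughly a financial-risk and a pipeline-risk score mapped to overall innovation risk, but the triangular partitions and the three rules are invented for illustration and are not the expert-agreed knowledge base of the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def mamdani(fin, pipe):
    z = np.linspace(0.0, 1.0, 201)                 # output universe: overall risk
    low = lambda v: tri(v, -0.5, 0.0, 0.5)
    high = lambda v: tri(v, 0.5, 1.0, 1.5)
    rules = [
        (min(low(fin), low(pipe)),   tri(z, -0.5, 0.0, 0.5)),   # both low  -> low risk
        (min(high(fin), high(pipe)), tri(z, 0.5, 1.0, 1.5)),    # both high -> high risk
        (max(min(low(fin), high(pipe)),
             min(high(fin), low(pipe))), tri(z, 0.0, 0.5, 1.0)),  # mixed -> medium risk
    ]
    agg = np.zeros_like(z)
    for strength, out_mf in rules:
        agg = np.maximum(agg, np.minimum(strength, out_mf))     # min implication, max aggregation
    return float(np.sum(z * agg) / np.sum(agg)) if agg.sum() else 0.5  # centroid

print(mamdani(0.1, 0.2), mamdani(0.9, 0.8))
```

The paper's third FIS would take the crisp outputs of two such subsystems as its own inputs, which is how the layered architecture composes the financial and pipeline sides into one overall risk score.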
Results
In an empirical application of the proposed methodology, we evaluate a sample of 31 large pharmaceutical laboratories in the period 2008–2013. Depending on the relative weight of the two subsystems in the first layer (capturing the financial and the pipeline sides of innovation risk), we estimate the overall risk. Comparisons across laboratories are made and graphical surfaces are analyzed in order to interpret our results. We have also run regressions to better understand the implications of our results.
Conclusions
The main contribution of this work is the development of an innovative fuzzy evaluation model that is useful for analyzing the innovation risk characteristics of large pharmaceutical laboratories given their strategic choices. The methodology is valid for carrying out a systematic analysis of the potential for developing new drugs over time and in a stable manner while managing the risks involved. We provide all the necessary tools and datasets to facilitate the replication of our system, which also may be easily applied to other settings.
