Title: Interpretability in Computational Intelligence
Speaker: Dr. Paulo Lisboa
Chair: Tuan Pham

Abstract
Theoretical advances in machine learning have been reflected in many research implementations, including in the domains of clinical health data and computational intelligence in bioinformatics. However, this has not translated into a large number of practical applications used by domain experts. This bottleneck is in significant part due to the lack of interpretability of the non-linear models derived from data. This lecture will explore legal and practical constraints on the use of computational intelligence systems by third parties in the context of applications in the health domain. It will then review four broad categories of interpretability in machine learning – nomograms, rule induction, graphical models and topographic mapping – leading to an overview of exciting current developments in each of these areas, as well as promising new directions in sparsity models and data networks derived from information geometry.

Biography
Paulo Lisboa is Professor of Industrial Mathematics in the School of Computing and Mathematical Sciences at Liverpool John Moores University and Research Professor at St Helens & Knowsley Teaching Hospitals. His research is focused on computer-based decision support in healthcare and on data analytics in public health, sports science and computational marketing. In particular, he has an interest in principled approaches to the interpretable modelling of non-linear data and processes.

He has over 250 refereed publications and has received awards for citations. He chairs the Medical Data Analysis Task Force of the Data Mining Technical Committee of the IEEE Computational Intelligence Society (IEEE-CIS) and is Associate Editor for IET Science, Measurement & Technology, Neural Computing and Applications, Applied Soft Computing and Source Code for Biology and Medicine. He is also a member of the EPSRC Peer Review College and an expert evaluator for the European Commission DG-INFSO.