Paper accepted at IEEE eScience 2018 – Nicky Nicolson

Title: Specimens as research objects: reconciliation across distributed repositories to enable metadata propagation

Authors: Nicky Nicolson (n.nicolson@kew.org)1,3, Alan Paton2, Sarah Phillips2, Allan Tucker3

Affiliations: 1. Biodiversity Informatics & Spatial Analysis, RBG Kew (UK), 2. Collections, RBG Kew (UK), 3. Department of Computer Science, Brunel University London (UK)

Abstract: Botanical specimens are shared as long-term consultable research objects in a global network of specimen repositories. Multiple specimens are generated from a shared field collection event; generated specimens are then managed individually in separate repositories and independently augmented with research and management metadata which could be propagated to their duplicate peers. Establishing a data-derived network for metadata propagation will enable the reconciliation of closely related specimens which are currently dispersed, unconnected and managed independently. Following a data mining exercise applied to an aggregated dataset of 19,827,998 specimen records from 292 separate specimen repositories, 36% or 7,102,710 specimens are assessed to participate in duplication relationships, allowing the propagation of metadata among the participants in these relationships, totalling 93,044 type citations, 1,121,865 georeferences, 1,097,168 images and 2,191,179 scientific name determinations. The results enable the creation of networks to identify which repositories could work in collaboration. Some classes of annotation (particularly those regarding scientific name determinations) represent units of scientific work: appropriate management of this data would allow the accumulation of scholarly credit to individual researchers. Potential further work in this area is discussed.

Conference website: https://www.escience2018.com/

Preprint: https://arxiv.org/abs/1809.07725
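
The reconciliation idea at the heart of the paper, that duplicate specimens originate from one shared field collection event, can be sketched as a simple grouping by collector and collection number. This is a toy illustration with hypothetical records and a deliberately crude name normaliser, not the paper's actual data-mining pipeline:

```python
from collections import defaultdict

# Hypothetical specimen records from different repositories; duplicates share
# a field collection event, identified here by collector name and number.
records = [
    {"repo": "K",  "collector": "Smith, J.", "number": "1234", "taxon": "Salvia officinalis"},
    {"repo": "NY", "collector": "smith j",   "number": "1234", "taxon": None},
    {"repo": "P",  "collector": "Jones, A.", "number": "77",   "taxon": "Mentha aquatica"},
]

def normalise(name):
    """Crude collector-name normalisation: lowercase, letters only."""
    return "".join(c for c in name.lower() if c.isalpha())

# Group records by (normalised collector, collection number): any group with
# more than one member is a candidate duplicate set across repositories.
groups = defaultdict(list)
for r in records:
    groups[(normalise(r["collector"]), r["number"])].append(r)

duplicates = {k: v for k, v in groups.items() if len(v) > 1}
# Metadata propagation: a determination recorded in one repository can be
# suggested for duplicate peers in other repositories that lack it.
```

Here the K and NY records reconcile into one duplicate set despite the differently formatted collector names, so K's scientific name determination could be propagated to the NY specimen.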

Paper accepted at IntelliSys 2018 – Samy Ayed

Title: An Exploratory Study of the Inputs for Ensemble Clustering Technique as a Subset Selection Problem

Authors: Samy Ayed, Mahir Arzoky, Stephen Swift, Steve Counsell and Allan Tucker

Abstract: Ensemble and Consensus Clustering address the problem of unifying multiple clustering results into a single output that best reflects the agreement of the input methods. They can be used to obtain more stable and robust clustering results than a single clustering approach. In this study, we propose a novel subset selection method that controls the number of clustering inputs and datasets in an efficient way. We propose a number of manual selection and heuristic search techniques to perform this selection. Our investigation and experiments demonstrate very promising results. Using these techniques ensures a better selection of methods and datasets for Ensemble and Consensus Clustering, and thus more efficient clustering results.

Conference: Intelligent Systems Conference (IntelliSys) 2018, London.

The paper will be published in the Springer LNCS proceedings.
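
The consensus-clustering idea the paper builds on can be illustrated with a co-association matrix over a handful of made-up base clusterings. This is a minimal sketch of the general technique; the paper's subset selection heuristics are not shown:

```python
import numpy as np

# Three hypothetical base clusterings of six items. Label ids are arbitrary,
# so the third row describes the same partition as the first.
labelings = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [1, 1, 1, 0, 0, 0],
])
n_items = labelings.shape[1]

# Co-association matrix: fraction of base clusterings placing i and j together.
co = np.zeros((n_items, n_items))
for lab in labelings:
    co += lab[:, None] == lab[None, :]
co /= len(labelings)

# Consensus partition: link items that are co-clustered in a majority of the
# inputs, then take connected components of that graph.
linked = co >= 0.5
consensus = [-1] * n_items
cluster = 0
for i in range(n_items):
    if consensus[i] == -1:
        stack = [i]
        while stack:
            j = stack.pop()
            if consensus[j] == -1:
                consensus[j] = cluster
                stack.extend(k for k in range(n_items) if linked[j, k])
        cluster += 1
```

Despite the disagreement of the second clustering and the relabelled third one, the consensus recovers the underlying two-group partition.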

New PhD student

Welcome to Ben Evans, who has been awarded a prestigious London NERC DTP scholarship on the project “A global canonical image data set for automatic species classification”, working with the Zoological Society of London and Google.


New Seminar Series in IDA

We have a new funded seminar series in the IDA group starting in October 2018 on the theme of “Opening the Black Box”.

Please look out for details here on the website in the coming months.

Summer Short Course – Data Analysis and R (11th-12th Jul)

Making Sense out of Software Engineering Data and an Introduction to R

Prof Sandro Morasca, Università degli studi dell’Insubria, Italy

The FREE summer short course (funded by Erasmus+) was organised by Prof Martin Shepperd on 11-12 July, 2018 (13:00-17:00 in WLFB208).

The course addressed the techniques that can sensibly be used to extract knowledge from Software Engineering data acquired via experiments or routine data collection in industrial contexts, and to make that knowledge practically useful. It described and critically discussed a number of data analysis techniques, explaining their preconditions and their outcomes, and illustrated both basic, traditional techniques and innovative ones, such as those based on robust regression or machine learning. It also explained how the resulting models can be validated.
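
As a flavour of the robust techniques the course covered, the sketch below contrasts ordinary least squares with the Theil–Sen estimator on data containing one gross outlier. It is written in Python rather than R purely as an illustration and is not course material:

```python
import numpy as np

# Straight-line data y = 2x + 1 with a single gross outlier at the end.
x = np.arange(10.0)
y = 2 * x + 1
y[-1] = 100.0

# Ordinary least squares: the outlier drags the fitted slope far from 2.
ols_slope = np.polyfit(x, y, 1)[0]

# Theil-Sen: the median of all pairwise slopes, robust to the outlier.
pairs = [(i, j) for i in range(len(x)) for j in range(i + 1, len(x))]
slopes = [(y[j] - y[i]) / (x[j] - x[i]) for i, j in pairs]
ts_slope = float(np.median(slopes))
ts_intercept = float(np.median(y - ts_slope * x))
```

A single corrupted point moves the least-squares slope from 2 to above 6, while the Theil–Sen estimate recovers the true slope and intercept exactly.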

A big thank you to Sandro and Martin for running this fantastic short course.

Lecture slides can be found here.

IDA Meeting (4th Jul 2018)

The IDA meeting was held in WLFB 207/208 (2nd floor of Wilfred Brown) at 3:00 PM.

Talk by Natalia Viani, King’s College London

Abstract
Electronic health records represent a great source of valuable information for both patient care and biomedical research. Despite the efforts put into collecting structured data, a lot of information is available only in the form of free-text. For this reason, developing natural language processing (NLP) systems that identify clinically relevant concepts (e.g., symptoms, medication) is essential. Moreover, contextualizing these concepts from the temporal point of view represents an important step.
Over the past years, many NLP systems have been developed to process clinical texts written in English and belonging to specific medical domains (e.g., intensive care units, oncology). However, research for multiple languages and domains is still limited. During my PhD, I applied information extraction techniques to the analysis of medical reports written in Italian, with a focus on the cardiology domain. In particular, I explored different methods for extracting clinical events and their attributes, as well as temporal expressions. At the moment, I am working on the analysis of mental health records for patients with a diagnosis of schizophrenia, with the aim of automatically identifying symptom onset information starting from clinical notes.
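
The kind of temporal expression extraction described in the abstract can be sketched with a toy rule-based extractor. This is a deliberately simplified illustration with made-up patterns and an invented example note; real clinical NLP systems use far richer grammars, normalisation, and context:

```python
import re

# Toy patterns for two common temporal expression shapes:
# "March 2014" / "3 March 2014", and ISO dates such as "2014-06-02".
MONTHS = ("January|February|March|April|May|June|July|"
          "August|September|October|November|December")
TEMPORAL = re.compile(
    r"\b(?:\d{1,2}\s+)?(?:%s)\s+\d{4}\b|\b\d{4}-\d{2}-\d{2}\b" % MONTHS,
    re.IGNORECASE,
)

def extract_temporal(text):
    """Return all temporal expressions found in a note, in order."""
    return [m.group(0) for m in TEMPORAL.finditer(text)]

# Hypothetical clinical note.
note = "Symptom onset reported in March 2014; first admission on 2014-06-02."
mentions = extract_temporal(note)
```

Anchoring each extracted mention to a clinical event (here, symptom onset versus admission) is the harder step that systems like those described above address.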

Dr Viani is a postdoctoral research associate at the Department of Psychological Medicine, NIHR Biomedical Research Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London. She received her PhD in Bioengineering and Bioinformatics from the Department of Electrical, Computer and Biomedical Engineering, University of Pavia, in January 2018. During her PhD, she spent six months as a visiting research scholar in the Natural Language Processing Laboratory at the Computational Health Informatics Program at Boston Children’s Hospital – Harvard Medical School. Her research interests are natural language processing, clinical and temporal information extraction, and biomedical informatics. She is especially interested in the reconstruction of clinical timelines starting from free text.

Slides from the talk can be found here.

Machine Learning Reading Group (4th Jul 2018)

The Machine Learning Reading Group was held on 04/07/2018 at 1:30 PM (IDA/BSEL Lab). The core concept for this meeting was random forests, and the article discussed was “Prediction of the FIFA World Cup 2018 – A random forest approach with an emphasis on estimated team ability parameters”: https://arxiv.org/pdf/1806.03208.pdf
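
For anyone who missed the session, the bagging-plus-random-features idea behind random forests can be sketched from scratch with depth-1 trees (decision stumps). This is a toy illustration on synthetic data, not the World Cup model from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: two well-separated classes; features 0 and 1 are
# informative, feature 2 is pure noise.
n = 20
class0 = np.c_[rng.normal(-2, 0.5, n), rng.normal(-2, 0.5, n), rng.normal(0, 1, n)]
class1 = np.c_[rng.normal(2, 0.5, n), rng.normal(2, 0.5, n), rng.normal(0, 1, n)]
X = np.vstack([class0, class1])
y = np.array([0] * n + [1] * n)

def majority(labels):
    """Most frequent label (0 if empty)."""
    return int(np.bincount(labels).argmax()) if len(labels) else 0

def fit_stump(X, y, features):
    """Best depth-1 split, searching only a random subset of features."""
    best = (-1.0, 0, 0.0, 0, 0)  # accuracy, feature, threshold, side labels
    for f in features:
        for t in np.unique(X[:, f]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            ll, rl = majority(left), majority(right)
            acc = float((np.where(X[:, f] <= t, ll, rl) == y).mean())
            if acc > best[0]:
                best = (acc, f, t, ll, rl)
    return best[1:]

def fit_forest(X, y, n_trees=10, n_feats=2):
    """Random forest of stumps: bootstrap rows, random feature subsets."""
    forest = []
    for _ in range(n_trees):
        boot = rng.integers(0, len(y), len(y))                  # bootstrap sample
        feats = rng.choice(X.shape[1], n_feats, replace=False)  # feature subset
        forest.append(fit_stump(X[boot], y[boot], feats))
    return forest

def predict(forest, X):
    """Majority vote over all stumps."""
    votes = np.array([np.where(X[:, f] <= t, ll, rl) for f, t, ll, rl in forest])
    return np.array([majority(votes[:, i]) for i in range(len(X))])

forest = fit_forest(X, y)
accuracy = float((predict(forest, X) == y).mean())
```

Each individual stump sees only a bootstrap sample and a random subset of features, yet the vote over the ensemble classifies the separable data reliably, which is the same mechanism the full algorithm uses with deep trees.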

A short presentation on Random forests can be found here.

CBMS 2018 Best Student Paper – Leila Yousefi

Congratulations to Leila Yousefi who won best student paper at IEEE CBMS 2018. The paper is titled “Predicting Disease Complications Using a Step-Wise Hidden Variable Approach for Learning Dynamic Bayesian Networks”

Below is the abstract and full list of authors.

Predicting Diabetes Type 2 Mellitus (T2DM) complications such as retinopathy and liver disease is still a challenge despite being a growing public health concern worldwide. This is due to the complex interactions between complications and other features, as well as between the different complications, themselves. What is more, there are likely to be many unmeasured effects that impact the disease progression of different patients. Probabilistic graphical models such as Dynamic Bayesian Networks (DBNs) have demonstrated much promise in the modeling of disease progression and they can naturally incorporate hidden (latent) variables using the EM algorithm. Unlike deep learning approaches that attempt to model complex interactions in data by using a large number of hidden variables, we adopt a different approach. We are interested in models that not only capture unmeasured effects but are also transparent in how they model data so that knowledge about disease processes can be extracted and trust in the model can be maintained by clinicians. As a result, we have developed a step-wise hidden variable structure learning process that incrementally adds hidden variables based on the IC* algorithm. To the best of our knowledge, this is the first study for classifying disease complication using a step-wise learning methodology for identifying hidden and T2DM features with a DBN structure from clinical data. Our extensive set of experiments show that the proposed method improves classification accuracy, identifying the correct number of hidden variables, and targeting their precise location within the network structure.

Leila Yousefi, Allan Tucker, Mashael Al-luhaybi, Lucia Saachi, Riccardo Bellazzi and Luca Chiovato.
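
The latent-variable idea in the abstract, estimating hidden quantities with the EM algorithm, can be shown in its simplest form with a two-component Gaussian mixture. This is a toy numpy sketch of EM on invented data, not the paper's DBN structure-learning method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D observations generated by two latent groups the model must recover.
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])

# EM for a two-component Gaussian mixture, the simplest latent-variable model.
mu = np.array([0.0, 1.0])     # component means (deliberately poor start)
sigma = np.array([1.0, 1.0])  # component standard deviations
w = np.array([0.5, 0.5])      # mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = (w * np.exp(-((x[:, None] - mu) ** 2) / (2 * sigma**2))
            / (sigma * np.sqrt(2 * np.pi)))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(x)
```

The hidden group membership is never observed, yet EM recovers the two underlying means (near 0 and 5). The paper applies this same principle within a DBN, additionally learning where in the network structure the hidden variables belong.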

Well done, Leila!