Each year, the BRAINTEASER project organizes an open evaluation challenge, called iDPP@CLEF (Intelligent Disease Progression Prediction), to involve research groups from academia and industry in assessing the performance of their AI algorithms in predicting the progression of ALS and MS. These challenges are open to anyone wishing to participate, and they are built around real clinical and sensor data (properly anonymized) provided by the clinical partners in our consortium.

The challenges represent a unique opportunity for researchers to access highly curated data and to discuss their own approaches with other researchers interested in the same topics. Indeed, since all the groups participating in a challenge operate on the same datasets, it is possible to directly compare the performance of all the proposed approaches and solutions and to understand which of them work best and which do not.

Every year we organize a workshop at the end of the challenge, where participants can meet and discuss face-to-face what they did, what worked, and why. Moreover, peer-reviewed papers by participants, describing their approaches, and overview papers by the organizers, summarizing the main findings of each edition, are published online in the CEUR-WS proceedings series, which grants free and open access to all of them. In this way, we accelerate knowledge transfer and impact both inside and outside the project, because the best-of-breed approaches are quickly shared with everyone interested in the research community, as well as with industry, policy makers, and, where relevant, the general public. Last, but not least, the challenge helps build a cohesive research community, allowing researchers to network and learn from each other, and lays the foundations for a durable impact in the field, even after the end of the project.

This year, in the iDPP@CLEF 2023 challenge, we organized three tasks: two focused on the progression of MS and one on the progression of ALS. The first task dealt with the prediction of MS worsening, according to clinical criteria based on the EDSS (Expanded Disability Status Scale) score.

The second task built on the first one and investigated the probability of MS worsening within a given time window, e.g., 2, 4, 6, or 8 years. The third task explored the impact of pollutants on the worsening of ALS and whether they help predict the time to PEG (Percutaneous Endoscopic Gastrostomy), NIV (Non-Invasive Ventilation), or death.
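To make the first task more concrete, here is a minimal sketch of how a binary "worsening" label could be derived from a patient's EDSS scores. The thresholds follow a commonly used clinical rule for disability worsening and are an assumption for illustration only: the exact definition adopted in iDPP@CLEF may differ.

```python
# Minimal sketch of deriving a binary "worsening" label from EDSS scores.
# The thresholds below follow a commonly used clinical rule and are an
# assumption for illustration; the exact iDPP@CLEF definition may differ.

def edss_worsening(baseline: float, follow_up: float) -> bool:
    """Return True if the follow-up EDSS indicates worsening vs. baseline."""
    if baseline == 0.0:
        return follow_up - baseline >= 1.5
    elif baseline <= 5.5:
        return follow_up - baseline >= 1.0
    else:
        return follow_up - baseline >= 0.5

# Hypothetical example: baseline EDSS 2.0, later visit at 3.0 -> worsening.
print(edss_worsening(2.0, 3.0))   # True
print(edss_worsening(6.0, 6.0))   # False
```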

Data and their quality are key concerns for BRAINTEASER, since both the predictive AI algorithms developed by the consortium and the open evaluation challenges revolve around them. To this end, BRAINTEASER fully embraces the Open Science and FAIR (Findable, Accessible, Interoperable, Reusable) principles and puts a lot of effort into curating the datasets it develops. This happens through various means: we designed an ontology to semantically model the clinical and sensor data we work with and to ensure their correctness; when ingesting clinical and sensor data into the knowledge base informed by our ontology, we adopt strict cleaning and filtering procedures to ensure the correctness of the ingested instances; the training and test data we use internally and in the challenges are then derived from this highly curated knowledge base; and, finally, the challenges themselves represent a further validation of our datasets, since external research groups access them and can verify that they are suitable for developing AI algorithms.
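As a rough illustration of the kind of cleaning and filtering applied before records enter the knowledge base, here is a minimal sketch of a validity check on visit records. The field names and rules are hypothetical assumptions for illustration, not the project's actual pipeline.

```python
# Minimal sketch of a cleaning/filtering check before ingesting clinical
# records into a knowledge base. Field names and rules are hypothetical
# assumptions for illustration, not BRAINTEASER's actual procedures.

from datetime import date

def is_valid_visit(record: dict) -> bool:
    """Accept a visit record only if its EDSS score and date are plausible."""
    edss = record.get("edss")
    visit_date = record.get("visit_date")
    # EDSS is scored from 0.0 to 10.0 in 0.5-point steps.
    if edss is None or not (0.0 <= edss <= 10.0) or (edss * 2) % 1 != 0:
        return False
    # Reject visits that are missing a date or dated in the future.
    if visit_date is None or visit_date > date.today():
        return False
    return True

visits = [
    {"edss": 3.5, "visit_date": date(2021, 5, 10)},
    {"edss": 11.0, "visit_date": date(2021, 6, 1)},   # out-of-range score
]
clean = [v for v in visits if is_valid_visit(v)]
print(len(clean))  # 1
```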

Since data are the fuel of research, after all the above quality checks and curation steps we release our datasets for free to anyone wishing to conduct further research, sharing and integrating them in the European Open Science Cloud (EOSC) via Zenodo. However, we do not simply put our datasets out in the wild: we share them in a responsible way. Indeed, we ask requesters to submit a project describing the intended use and the kinds of inferences they plan to draw from our data, and a committee of experts (both medical doctors and computer scientists) verifies that the intended use of the data meets high ethical, clinical, and scientific standards.

Listen to Nicola Ferro, challenge organizer and full professor of computer science (University of Padua – IT), for more valuable insights into this BRAINTEASER initiative!