
Explainable AI through Machine Learning visualisations

Vrije Universiteit Amsterdam
January 11, 2023

InterConnect gathers 50 European entities to develop and demonstrate advanced solutions for connecting and converging digital homes and buildings with the electricity sector.

Machine Learning (ML) algorithms play a significant role in the InterConnect project. Most prominent are the services that perform some kind of forecasting, such as predicting energy consumption for (smart) devices and households in general.
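As a rough illustration of such a forecasting service, the sketch below trains a simple model to predict the next hour of household consumption from the previous 24 hours. The synthetic data, the lag-feature setup and the choice of linear regression are all assumptions for illustration, not the project's actual forecasting method.

```python
# Illustrative forecasting sketch (not InterConnect's actual service):
# predict the next hour of energy consumption from the previous 24 hours.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(168)  # one week of hourly readings
# Synthetic consumption in kWh: a daily cycle plus a little noise.
consumption = 1.5 + 0.8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.05, 168)

# Lag features: each row is a 24-hour window, the target is the next hour.
X = np.array([consumption[i:i + 24] for i in range(len(consumption) - 24)])
y = consumption[24:]

model = LinearRegression().fit(X, y)
next_hour = model.predict(consumption[-24:].reshape(1, -1))
print(f"forecast for next hour: {float(next_hour[0]):.2f} kWh")
```

In practice such services would of course use richer features (weather, occupancy, device schedules) and more capable models; the point here is only the input/output shape of a forecasting task.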


The prime goal of InterConnect is standardization of the interaction between smart devices, sensors and services. Including a uniform interface to services that can apply machine learning in a standardized manner therefore benefits their re-use and improves their adoption. SAREF and its extensions are the schemas and vocabularies we use to express the capabilities, interaction details and other aspects of the smart components and facilitating services, including measurement parameters and values. This opens up the possibility of exploring how measurements expressed in SAREF can automatically be converted into the input expected by well-known machine learning algorithms that can deal with these specific types of input. For example, a classification task is based on a fixed set of values (e.g. ‘open’, ‘closed’), while an estimation task often uses a numerical range (e.g. ’24.2 °C’).
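A minimal sketch of that conversion idea might look as follows. The dictionary structure and the `allowedValues` key are simplified assumptions made for this example (SAREF itself is an RDF vocabulary, where `hasValue` is a measurement property); the point is only the split between categorical input for classification and numeric input for estimation.

```python
# Illustrative sketch (not the project's actual converter): mapping a
# SAREF-style measurement to an ML-ready input. The dict layout and the
# "allowedValues" key are simplified assumptions for this example.
def to_ml_input(measurement):
    """Route a measurement to a classification or estimation input.

    A fixed set of states (e.g. 'open'/'closed') suits classification;
    a numeric value with a unit suits estimation.
    """
    value = measurement["hasValue"]
    if isinstance(value, str):
        # Categorical: encode the value against its fixed set of states.
        states = measurement["allowedValues"]
        return ("classification", states.index(value))
    # Numeric: keep the number, the unit is handled separately.
    return ("estimation", float(value))

print(to_ml_input({"hasValue": "open", "allowedValues": ["open", "closed"]}))
print(to_ml_input({"hasValue": 24.2, "isMeasuredIn": "om:degreeCelsius"}))
```

A real converter would read these fields from the RDF graph and validate units, but the routing decision would follow the same pattern.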


Explainability of (the outcomes of) AI algorithms is becoming an important research domain where users want to know why an algorithm came to a certain result. The European Union has expressed the need to address this important aspect through clear guidelines.



SAREF allows us to standardize input formats for common ML approaches, and explainability can be increased by selecting algorithms that are inherently interpretable (e.g. Decision Trees). Combined with interactive web environments such as Jupyter Notebooks, this creates a convenient solution in which users can follow and visualize the algorithmic procedures step by step, and which forms an implementation example for explainable AI (fig. 2).


Below is a video that shows the approach and an example of how the visualization of ML models can contribute to Explainable AI.


For more information please visit the GitHub page containing various examples.

