ABSTRACT

With the rise of Artificial Intelligence (AI), fuelled by Big Data and algorithms, an increasing number of decisions can now be taken automatically in many fields, such as financial services and insurance.

Although recourse to new technologies can be a valuable tool for insurers, allowing them to reveal patterns and correlations, generate new solutions, and predict certain events or behaviours, it is becoming increasingly important to verify whether intelligent systems comply with existing legal frameworks. In particular, transparency and explainability are amongst the most pressing issues raised by the use of automated decision-making systems.

First, this paper briefly introduces the new technologies supporting data management and automated decision-making, with a focus on their applications in the insurance business.

Subsequently, it presents the main legal challenges raised by automated decisions, with particular reference to the data subjects’ ‘right to an explanation’ under the General Data Protection Regulation (GDPR).

Finally, it attempts to synthesise the differing views on algorithmic explainability, presenting possible solutions to foster a constructive dialogue among the actors involved, from innovation facilitators to new multidisciplinary research fields.