
A new model for Artificial Intelligence design and development

On the occasion of the International Data Protection Day (Jan. 28), the Catalan Data Protection Authority (Autoritat Catalana de Protecció de Dades - APDCAT) presented to the Parliament of Catalonia a pioneering model in Europe for the development of Artificial Intelligence (AI) solutions that respect the fundamental rights of human beings.
AI is increasingly present in our society, with direct involvement in decision-making in many fields. This has major implications for our communities, as these systems use data to make decisions that directly affect the lives of individuals, groups, and society as a whole.
Over the past year, APDCAT has developed a model, the first of its kind in Europe, for assessing the impact of artificial intelligence on fundamental rights (FRIA, fundamental rights impact assessment), in line with the requirements of the new European Artificial Intelligence Regulation (AI Act). The document, intended as a reference for other organizations that need to carry out a FRIA, was developed within the “DPD in network” working group led by Alessandro Mantelero, professor of Private Law and Law & Technology at the Politecnico di Torino (PoliTO) and an international expert on AI and fundamental rights.
“This is the result of a challenging project led by APDCAT, and I am very proud of the work done over the past year with the various public and private entities involved,” comments Prof. Mantelero. “The impact on fundamental rights is a key component of both compliance assessment and the assessment under Article 27 of the European AI Act, but there is a lack of practical methodologies and models to carry it out. Some of the proposed methodologies have shortcomings; others have not been tested in concrete cases or lack detailed evidence of their performance.”
Development work on the new Catalan model involved creating an evaluation methodology and a template aimed at simplifying the process of evaluating and designing AI systems. Compared with previously proposed templates, which cover a variety of issues but give limited attention to fundamental rights, the new FRIA template combines a risk assessment methodology with the existing legal framework.
The developers adopted an empirical, case-based approach, which was crucial to testing the effectiveness of the proposed model in achieving the FRIA's policy objectives. The use cases, all within high-risk areas under the AI Act, demonstrated that the FRIA procedure can be streamlined by avoiding long checklists and focusing instead on the core elements of AI's impact on fundamental rights. They also demonstrated good usability: people with an appropriate background can complete a FRIA with limited effort.
The proposed methodology combines the traditional approach to risk management with one specific to fundamental rights, contextualizing the key variables of risk likelihood and severity with reference to the relevant legal framework. The three blocks that constitute the FRIA model (planning and scoping; data collection and risk analysis; risk management) use different technical tools, from guidance questionnaires to risk matrices. The resulting assessment is always contextual and focused on the specific application of AI, identifying its impact on different rights and the appropriate measures to mitigate it from a “trustworthy AI” perspective.
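To illustrate the risk-matrix mechanics mentioned above, here is a minimal sketch of how likelihood and severity levels can be combined into an overall risk rating. The level names, scoring, and thresholds below are illustrative assumptions for exposition only, not part of the APDCAT template:

```python
# Illustrative sketch of a likelihood x severity risk matrix, as used in
# FRIA-style risk analysis. Levels and thresholds are assumed, not taken
# from the APDCAT model.
from enum import IntEnum


class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4


def risk_rating(likelihood: Level, severity: Level) -> Level:
    """Combine likelihood and severity into an overall risk level."""
    score = likelihood * severity  # ranges from 1 to 16
    if score <= 2:
        return Level.LOW
    if score <= 6:
        return Level.MEDIUM
    if score <= 9:
        return Level.HIGH
    return Level.VERY_HIGH


# Example: a highly likely impact of high severity on a fundamental right
print(risk_rating(Level.HIGH, Level.HIGH).name)  # HIGH
```

In practice, the contextual assessment the model calls for would replace these fixed numeric thresholds with a legally grounded, case-specific evaluation; the matrix is only the final aggregation step.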
“FRIA, if properly designed, does not impose an undue additional burden on EU public and private entities in complying with the AI Act,” Mantelero concludes. “In terms of areas covered, the use cases relate to four of the key areas listed in Annex III of the AI Act, namely university education, workers' management, access to healthcare, and welfare services. The nature of the use cases discussed will also make them useful to many other public and private entities in other countries interested in designing fundamental rights-compliant AI systems or models in these particularly sensitive areas of our lives.”