Fundamental Rights Impact Assessment (FRIA)
An assessment required under Article 27 of the EU AI Act for certain deployers of high-risk AI systems — chiefly public bodies and private entities providing public services — that evaluates the system's impact on fundamental rights, including non-discrimination, privacy, freedom of expression, and human dignity, before deployment begins.
Why It Matters
FRIAs widen the lens beyond data privacy to the full spectrum of fundamental rights. An AI system can be fully privacy-compliant yet still chill freedom of expression or entrench discrimination.
Example
A government agency deploying AI for social benefit eligibility screening must conduct a FRIA examining whether the system could disproportionately deny benefits to specific ethnic groups, people with disabilities, or non-native language speakers.
Think of it like...
If a DPIA asks 'is the patient's data safe?', a FRIA asks 'is the patient being treated fairly, with dignity, and with respect for their autonomy?'
Related Terms
Algorithmic Impact Assessment (AIA)
A systematic process to evaluate the potential impacts of deploying an algorithmic system on individuals, groups, and society. It identifies risks before deployment and maps out mitigation strategies, serving as both a compliance tool and a design checkpoint.
Data Protection Impact Assessment (DPIA)
A systematic assessment of the potential impact of data processing activities on the rights and freedoms of individuals. Required under GDPR for high-risk processing, a DPIA is particularly relevant for AI systems that process personal data at scale or make automated decisions about people.
EU AI Act
The European Union's comprehensive regulatory framework for artificial intelligence, which establishes rules proportionate to risk. It categorizes AI systems on a scale from minimal to unacceptable risk, with compliance requirements that escalate accordingly.