Fundamental Rights Impact Assessments (FRIA)

We welcome all feedback and recommendations for improvement.

 

Overview

The use of AAA (artificial intelligence, algorithmic and autonomous) systems can adversely affect the fundamental rights of individuals. To help address these concerns, the EU AI Act and ForHumanity Certification Schemes include a number of provisions to help protect individuals from harm. One of these is the requirement for deployers of high-risk AAA systems to conduct a Fundamental Rights Impact Assessment (FRIA) in the specific context of use. A FRIA is a public disclosure document produced by the deploying organisation and shall be performed by the Ethics Committee or equivalent body within the organisation.

This document outlines a comprehensive strategy for mitigating any direct or indirect adverse effects on individuals' fundamental rights. The ethics committee must carefully evaluate the intended purpose, the geographical and temporal scope of use, the categories of affected individuals and groups, the specific risks faced by marginalised communities, and potential environmental implications, such as energy consumption. Additionally, the FRIA should cover adherence to both EU and national legislation, fundamental rights laws, and potential negative impacts on EU values. Public authorities should also consider democracy, the rule of law, and the allocation of public funding.

The Fundamental Rights Impact Assessment (FRIA) shall also be conducted in conjunction with the Data Protection Impact Assessment (DPIA), where one is required, and the DPIA shall be published as an addendum to the FRIA.

What is a high-risk AAA system?

The EU AI Act adopts a risk-based approach to AAA systems using a pyramid of criticality. It establishes different rules and obligations for providers, importers, distributors, deployers and authorised representatives depending on the level of risk posed by the AAA system. Lighter obligations apply to AAA systems that pose a negligible risk to humans, while AI systems that pose an unacceptable risk are banned. A Fundamental Rights Impact Assessment (FRIA) must be conducted for all high-risk AAA systems.

 

The different risk levels identified are:

 

UNACCEPTABLE RISK

Systems that are considered a threat to people, such as cognitive behavioural manipulation (e.g. voice-activated toys that encourage dangerous behaviour in children), social scoring (e.g. classifying people based on behaviour or socio-economic status), and real-time remote biometric identification systems (e.g. facial recognition).

 

 

These systems will be banned under the EU AI Act, with limited exceptions, such as military and national defence uses.

 

HIGH RISK

Systems that pose a risk of harm to health and safety, or of adverse impact on individuals' fundamental rights. This includes AAA systems used in products that fall under the EU’s product safety legislation (such as cars, aviation, medical devices, lifts and toys), along with AAA systems falling into eight specific categories:

  • Biometric identification and categorisation of natural persons

  • Management and operation of critical infrastructure

  • Education and vocational training

  • Employment, worker management and access to self-employment

  • Access to and enjoyment of essential private services and public services and benefits

  • Law enforcement

  • Migration, asylum & border control management

  • Assistance in legal interpretation and application of the law.

These systems are subject to extensive regulation and require deployers to have the ethics committee conduct a Fundamental Rights Impact Assessment (FRIA) on the AAA system.

LIMITED RISK

Systems that pose limited risk to individuals, such as chatbots, generative AI, and AAA systems that generate or manipulate image, audio or video content (e.g. deepfakes). These systems will have to comply with transparency requirements, such as disclosing that the content was created by AI.

 

MINIMAL RISK

Systems that pose minimal or low risk to individuals, such as AI used in computer games and AI-based spam filters. These systems are not subject to any restrictions and account for the majority of AI systems used in the market today (2023).

 

What are individuals' fundamental rights?

The EU Charter of Fundamental Rights is a proclamation by the European Parliament, the Council, and the Commission. It establishes the common values upon which the EU is built, including human dignity, freedom, equality, and solidarity. The Charter emphasises the principles of democracy and the rule of law and places the individual at the centre. It aims to preserve and develop these values while respecting the diversity of European cultures and national identities. The Charter promotes balanced and sustainable development; free movement of people, goods, services, and capital; and the freedom of establishment. It strengthens the protection of fundamental rights in light of societal changes, social progress, and scientific advancements.

The Charter draws on constitutional traditions, international obligations, social charters, and relevant court rulings. It emphasises that the enjoyment of rights comes with responsibilities towards others, the human community, and future generations. The EU Charter of Fundamental Rights recognises the following fundamental rights:

  1. Human Dignity

  2. Right to Life

  3. Right to Integrity of the Person

  4. Prohibition of Torture & Inhuman or Degrading Treatment or Punishment

  5. Prohibition of Slavery and Forced Labour

  6. Right to Liberty and Security

  7. Respect for Private and Family Life

  8. Protection of Personal Data

  9. Right to Marry and Right to Found a Family

  10. Freedom of Thought, Conscience, and Religion

  11. Freedom of Expression and Information

  12. Freedom of Assembly and Association

  13. Freedom of Arts and Sciences

  14. Right to Education

  15. Freedom to Choose an Occupation 

  16. Right to Property

  17. Right to Asylum

  18. Right to Subsidiary Protection

  19. Right to Move and Reside Freely Within the EU

  20. Right to Diplomatic and Consular Protection

  21. Right to Vote and Stand as a Candidate in Elections to the European Parliament

  22. Right of Access to Documents

  23. Right to Good Administration

  24. Right to Effective Judicial Protection

  25. Right to a Fair Trial

  26. Presumption of Innocence & Right of Defense

  27. Principles of Legality & Non-Retroactivity of Criminal Offences & Penalties

  28. Right Not to Be Tried or Punished Twice in Criminal Proceedings

  29. Equality Before the Law

  30. Non-Discrimination

  31. Cultural, Religious, and Linguistic Diversity

  32. Equality Between Men and Women

  33. Best Interests of the Child

  34. Rights of the Child

  35. Protection of Family and Professional Life

  36. Protection Against Disability Discrimination

  37. Environmental Protection

  38. Consumer Protection

  39. Right to Access Healthcare

  40. Right of Older Persons Dignity & Independence

  41. Right to Social Security and Social Assistance

  42. Right to Housing

  43. Right to Collective Bargaining and Action

  44. Right to Strike

  45. Freedom to Conduct a Business

  46. Intellectual Property

  47. Right to an Effective Remedy and to a Fair Trial

  48. Right to Judicial Review

  49. Proportionality of Criminal Offences & Penalties

  50. Right to Data Protection

  51. Right to Humanitarian Aid

 

It is important that, when conducting a Fundamental Rights Impact Assessment (FRIA), the ethics committee considers any potential direct or indirect impacts on each of these fundamental rights.

 

Who are vulnerable groups?

Organisations should strive to ensure consistent and high levels of protection against discrimination on all grounds, including (at a minimum) sex, racial or ethnic origin, religion or belief, disability, age, sexual orientation, gender identity and gender expression in different areas of life.

 

The EU recognises several categories of vulnerable groups within its policies and directives. While the specific definitions and categorisations may vary across different EU laws and initiatives, commonly recognised vulnerable groups include, but are not limited to:

  • Children: Minors under the age of 18 who require special protection and care due to their age and vulnerability.

  • Persons with Disabilities, Serious Illnesses and/or Mental Disorders: Individuals with conditions that may hinder their full and equal participation in society.

  • Elderly Persons: Older individuals who may face challenges related to ageing, health, social inclusion, or access to appropriate care & support.

  • LGBTIQ+ Persons: Individuals who identify as lesbian, gay, bisexual, transgender, intersex, or queer and may face specific challenges related to discrimination, social exclusion, and human rights.

  • Women and Girls: Recognising gender-specific vulnerabilities, the EU emphasises the protection and promotion of the rights of women and girls, including combating gender-based violence and promoting gender equality.

  • Ethnic and Religious Minorities: Minority groups that may face discrimination, marginalisation, or persecution based on their ethnic, cultural, or religious background.

  • Asylum Seekers and Refugees: Individuals who have fled their home countries due to fear of persecution, war, or violence and have sought international protection within the EU.

  • Victims of Human Trafficking: Individuals who have been subjected to forced labour, sexual exploitation, or other forms of human trafficking.

  • Homeless Persons: Individuals who lack adequate housing.

 


 

Discrimination can take different forms: 

  • Direct discrimination: takes place when a person receives less favourable treatment than another in a comparable situation.

  • Indirect discrimination: takes place when an apparently neutral provision, criterion or practice puts people with a protected characteristic at a disadvantage compared with others.

  • Discrimination by association: is where a person is treated less favourably based on another person’s protected characteristic, but is not themselves the person with the protected characteristic. 

  • Multiple discrimination: when several grounds of discrimination are involved, where the grounds operate separately.

  • Intersectional discrimination: when several grounds of discrimination are involved, where the grounds interact and are inseparable.

For more information, see the FRA Handbook on European non-discrimination law (2018) and the FRA report Bias in Algorithms.

Criteria and guidance for Fundamental Rights Impact Assessments (FRIA):

Prior to placing the AAA system on the market, the Ethics Committee shall conduct a Fundamental Rights Impact Assessment (FRIA) which, as a minimum, must meet the following criteria:

 

A. Description of the Scope, Nature, Context, and Purpose, including the expected life of the AAA System



WHAT – Describing the scope of the AAA system: 

The ethics committee shall include a clear outline of the intended geographic scope of the system’s use, along with the defined boundaries of the AAA system and what is and is not covered by this Fundamental Rights Impact Assessment (FRIA); this should include any associated ecosystem that supports, is supported by, and/or integrates with the AAA system.

 

The ethics committee should take into account the nature of the data handled by the AAA system, including special category or criminal offence data. It is important to assess the amount of data collected and used by the AAA system, why the personal data is needed, as well as the duration and data retention periods for the personal data being processed. Furthermore, an examination should be conducted to determine the likely and potential number of individuals impacted by the AAA system and the frequency of processing operations.

 


 

HOW – Describing the nature of the AAA system: 

The ethics committee shall describe the nature of the AAA system and specify any intentional or unintentional feeds or inputs that could influence or change the expected results or outputs of the AAA system in ways that may impact individuals’ fundamental rights (e.g. variable effects in an Automated Employment Decision Tool (AEDT) for CVs submitted in a particular language or font). The ethics committee shall ensure the accessibility of the AAA system and of explainability information for users and impacted individuals.

 

The ethics committee should also outline how the AAA system will collect, use, store and delete data, define the source of the data, and assess whether it is likely to pose a high risk to individuals. They should also consider who will have access to the data, and whether the data will be shared with other parties and/or used for other purposes (e.g. retraining the AAA system, data monetisation, etc.). The assessment could also include diagrams or other means of describing the data flows of the AAA system.

 

 

 

 

WHO – Describing the context of the AAA system: 

The ethics committee shall describe the context of the AAA system: specify whether the AAA system is designed for a specific industry or sector (e.g. healthcare, finance, or government), who will be using or impacted by the AAA system, and whether those individuals include vulnerable groups that may need adjustments and/or additional safeguarding measures.

 

The ethics committee should also consider the relationship between the individuals using or impacted by the AAA system and the organisation (e.g. employees, customers, etc.), whether they would expect their data to be used in this way, and how much control individuals will have over the AAA system. They should also consider the current state and maturity of the technology in this area, and any concerns (e.g. public, ethical or security concerns) over the use of this type of AAA system that individuals should be able to factor in when using it.

 

WHY – Describing the purpose of the AAA system:

The ethics committee shall include a clear outline of the intended purpose for which the system will be used, and define the aim or goal of the AAA system.

 

The ethics committee should consider the organisation's business objectives and what it wants to achieve with the AAA system. They should also outline the intended effect(s) on individuals, and any benefits for the individuals using and/or impacted by the AAA system, for the organisation, and more broadly for society as a whole. They shall ensure purpose limitation by confirming that the use of the AAA system is necessary to achieve this objective and that there are no less impactful methods of achieving the same purpose.

 

HOW LONG – Describing the life expectancy of the AAA System:

The ethics committee shall include a clear outline of the intended temporal scope of the system’s use and how long the AAA system is likely to be on the market without significant change taking place.

 

The ethics committee should consider the threshold for determining when a significant change in the design or purpose of the AAA system has occurred, and any potential changes in technology, legislation, the market, business objectives, etc. that may force a significant change sooner than expected and/or affect the expected life of the AAA system.

 


 


 

B. Description of categories of groups and natural persons likely to be impacted

Describing the categories of individuals affected:

The ethics committee shall identify the categories of any individuals and groups of people (e.g. employees, customers, members, students, etc.) who are likely to be impacted by the AAA system.

 

The ethics committee should also specify the categories of any individuals and groups that will be using the AAA system, if different from those that will be impacted, along with the relationship between them and any bias between them (e.g. teachers using an AAA system that may impact pupils, or police officers using an AAA system that may impact suspects).


C. Verification of upholding Relevant Legal Frameworks

Verification of relevant legal frameworks: 

The ethics committee shall verify that the use of the AAA system complies with all relevant European Union and national law on fundamental rights. This shall also take into consideration vulnerable groups that may be impacted differently from other individuals by the AAA system, to ensure non-discrimination in accordance with European and national non-discrimination laws. It is important to highlight that vulnerable groups may have specific rights enshrined in the EU Charter of Fundamental Rights and in national and international laws (e.g. the United Nations Convention on the Rights of the Child) which require consideration of the individual’s vulnerabilities and the provision of such protection and care as is necessary for their well-being.

The ethics committee should also consider individuals’ fundamental right to a high level of environmental protection, enshrined in the EU Charter of Fundamental Rights. European Union and national policies should also be considered when assessing the severity of the harm that an AAA system can cause, including in relation to the health and safety of individuals.

 


D. Log (in the risk log) reasonably foreseeable impacts and/or harms on natural persons, especially at-risk Protected Categories and/or Vulnerable Groups, using Diverse Input and Multi Stakeholder Feedback from human risk assessors

Identifying foreseeable impact(s) and defining seriousness:

The ethics committee shall, when putting the AAA system into use, identify any foreseeable impact on fundamental rights and any specific risks of harm likely to impact vulnerable groups. They shall specify whether the different categories of individuals and groups are likely to be impacted in the same way, or whether this could differ for each, and the reasons why. It is mandatory for large organisations, and voluntary for SMEs, to notify the national supervisory authority and relevant stakeholders and to involve representatives of the individuals or vulnerable groups (e.g. equality bodies) likely to be affected by the AAA system. They shall allow a period of six weeks for bodies to respond and shall document their input into the impact assessment.

 

The ethics committee should consider the relationships between the users of the AAA system and those that may be impacted by it, establish any possible conflicts between the groups, and identify any risks or issues that could lead to inputs or outputs being affected in a way that changes the impact of the AAA system on individuals (e.g. teachers biasing inputs/outputs in favour of their pupils, or police officers biasing inputs/outputs in a way that negatively impacts suspects).

 

Step 1) Identify any applicable individuals' fundamental right(s) that may be impacted directly or indirectly by the AAA system.

Step 2) Identify any vulnerable groups whose Fundamental Rights may be affected differently.

Step 3) Outline the actual or potential impact on individuals’ fundamental rights. This involves assessing how the impact might differ for vulnerable groups and based on relationship biases between users and those impacted by the AAA system. It’s important to evaluate each fundamental right against each group separately, as different groups may experience varying impacts and severities.

 

Step 4a) Identify the severity of any interference with the fundamental rights for each vulnerable group identified. Risks should be graded by severity of impact (i.e. Serious, High, Medium, Low, None).

 

Step 4b) Identify the likelihood of each of the identified risks to Fundamental Rights for each identified vulnerable group. Likelihood should be graded by likelihood to occur (i.e. Persistent, Likely, Possible, Unlikely, Rare).

 

Step 4c) Combine the scores to determine the overall risk to individuals. Overall risk should be determined by Severity x Likelihood and graded by seriousness (i.e. High Risk, Medium Risk, Low Risk).
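The scoring in Steps 4a to 4c can be sketched in code. The numeric weights and banding thresholds below are illustrative assumptions for demonstration only; neither the EU AI Act nor ForHumanity prescribes specific values, and each organisation must calibrate its own scale.

```python
# Illustrative risk-matrix sketch for Steps 4a-4c. The numeric weights and
# banding thresholds are assumptions, not prescribed values.

SEVERITY = {"None": 0, "Low": 1, "Medium": 2, "High": 3, "Serious": 4}
LIKELIHOOD = {"Rare": 1, "Unlikely": 2, "Possible": 3, "Likely": 4, "Persistent": 5}

def overall_risk(severity: str, likelihood: str) -> str:
    """Combine severity and likelihood (Severity x Likelihood) into an
    overall grading of High Risk / Medium Risk / Low Risk."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 12:  # assumed band: e.g. High severity x Likely, or worse
        return "High Risk"
    if score >= 6:
        return "Medium Risk"
    return "Low Risk"

# One row per (fundamental right, vulnerable group) pair, assessed separately
# as Step 3 requires; the example rows are hypothetical.
assessments = [
    ("Non-Discrimination", "Ethnic minorities", "High", "Likely"),
    ("Protection of Personal Data", "Children", "Medium", "Possible"),
    ("Right to Education", "Persons with disabilities", "Low", "Rare"),
]

for right, group, sev, like in assessments:
    print(f"{right} / {group}: {overall_risk(sev, like)}")
```

The point of the sketch is that each fundamental right is scored against each group separately, producing a per-pair overall grading rather than a single figure for the whole AAA system.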

 

For further guidance and information, see the ForHumanity Risk Management Body of Knowledge and the ForHumanity Systemic Societal Impact Analysis (SSIA). If an AAA system affects or may affect a fundamental right, this does not necessarily mean that it cannot be used. However, it does mean that additional requirements and/or mitigations must be implemented to reduce the risk (Section F).

 

E. Log (in the risk log) negative impacts to the environment using Diverse Input and Multi Stakeholder Feedback from human risk assessors

Identifying foreseeable impact(s) and defining seriousness:

The ethics committee shall also specifically consider the reasonably foreseeable adverse impact the AAA system will or may have on the environment (Article 37 of the EU Charter of Fundamental Rights). They shall ensure that a high level of environmental protection, and a commitment to improving the quality of the environment, are integrated into the organisation's policies. This may include climate change, biodiversity, water protection, air and noise pollution, waste management, integrated pollution prevention and control, integrated product policy and environmental liability (e.g. the processing power and energy consumption used by data centres to train and run the AAA system). It is mandatory for large organisations, and voluntary for SMEs, to consult with relevant and diverse stakeholders. They shall document how and why the stakeholders were chosen and when and how they will be consulted.

 

The ethics committee should document any positive impacts the AAA system has on the environment and include any environmentally positive or sustainable decisions and/or changes made.

 

Step 1) Identify any environmental impact that may be caused directly or indirectly by the AAA system.

 

Step 2) Specify the actual or potential impact on the environment.

Step 3a) Identify the severity of any impact on the environment. Risks should be graded by severity of impact (i.e. Serious, High, Medium, Low, None).

Step 3b) Identify the likelihood of each of the identified risks to the environment. Likelihood should be graded by likelihood to occur (i.e. Persistent, Likely, Possible, Unlikely, Rare).

 

Step 3c) Combine scores to determine the overall risk to the environment. Overall risk should be determined by Severity x Likelihood and graded by seriousness (i.e. High Risk, Medium Risk, Low Risk).

 

If an AAA system affects or threatens to affect the environment, then additional requirements and/or mitigations must be implemented to reduce the risk (Section F).

 


 

 

 

F. Identification of controls, risk treatments and mitigation for logged negative impacts to fundamental rights

Mitigating identified risks:

The ethics committee shall create a detailed plan on how the organisation can mitigate any actual or potential harms and/or negative impacts on an individual’s fundamental rights. They shall identify the measures and mitigations the organisation can take in order to mitigate the identified risks to individuals (specified in D). It is important to consider that the mitigations required may be different for each group of individuals, particularly marginalised persons or vulnerable groups.

 

Step 1) Identify the parameters to determine the level of acceptable risk.

 

Step 2) For risks that exceed the threshold for acceptable risk, the ethics committee shall identify possible measures and mitigations that could be implemented to reduce the risk to the individuals impacted by the AAA system. Typically, the aim is to eliminate the risk, reduce the risk, transfer the risk, or accept the risk.

 

Step 3) Identify the residual risk that will remain after implementation of the proposed measures and mitigations, and confirm that it strikes a reasonable balance between the objectives pursued and the fundamental rights that may be infringed.

 

Step 4a) Identify the severity of any residual risks to the fundamental rights per vulnerable group identified. Risks should be graded by severity of impact, (i.e. Serious, High, Medium, Low, None).

Step 4b) Identify the likelihood of any residual risks to Fundamental Rights for each identified vulnerable group. Likelihood should be graded by likelihood to occur (i.e. Persistent, Likely, Possible, Unlikely, Rare).

 

Step 4c) Combine the scores to determine the overall residual risk to individuals. Overall risk should be determined by Severity x Likelihood and graded by seriousness (i.e. High, Medium, Low Risk).

 

Step 5) Assess and decide whether to implement the proposed measures and mitigations (i.e. based on cost, time, effort to implement the measures versus the potential reduction of the risk) and the reasons for implementing or not implementing the measures. 

 

Step 6) Implement any agreed measures and mitigations into practice.

 

Once all the proposed mitigations have been introduced, the ethics committee shall establish if risks to individuals have been sufficiently reduced to enable the AAA system to be put into use.

 

If the risks to individuals cannot be mitigated, and/or the organisation has reason to consider that use in accordance with the provider's instructions may result in the AI system presenting a risk within the meaning of Article 65(1), it shall, without undue delay, inform the provider or distributor and the relevant national supervisory authorities, and shall refrain from putting the high-risk AAA system into use.
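The deployment gate described in Section F can be sketched as a simple check of each residual risk (the output of Step 4c) against an acceptable-risk threshold. The threshold chosen here ("Low Risk" only) is an illustrative assumption; Step 1 requires each organisation to define its own parameters.

```python
# Hypothetical deployment gate for Section F. Treating only "Low Risk" as
# acceptable is an assumption for illustration, not a prescribed threshold.

ACCEPTABLE = {"Low Risk"}

def can_deploy(residual_risks: list[str]) -> bool:
    """Return True only if every residual risk falls within the assumed
    acceptable-risk threshold; otherwise the AAA system must not be put
    into use and the provider and authorities must be informed."""
    return all(risk in ACCEPTABLE for risk in residual_risks)

print(can_deploy(["Low Risk", "Low Risk"]))   # True
print(can_deploy(["Low Risk", "High Risk"]))  # False
```

Note that a single unmitigated residual risk above the threshold blocks deployment for the whole AAA system, mirroring the requirement to refrain from putting the system into use.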

 

 

G. Description of the governance and monitoring system for controls, risk treatments and mitigations 

Monitoring controls, risk treatments and mitigations:

The ethics committee shall draft documentation outlining the organisational governance framework. This shall detail the operational function of the governance, oversight, and accountability system, and include an overview of the duly designated team of experts trained in understanding the specific multi-disciplinary risks associated with AAA systems, including (but not limited to) risks to individuals' rights and freedoms, privacy-by-design, and data protection.

 

The committee shall ensure there is a monitoring system in place for the controls, risk treatments and mitigations, to ensure they remain effective in reducing the risk to individuals. They shall also ensure there are effective procedures for evaluating the effectiveness of the controls, risk treatments and mitigations in place. This is important because the AAA system for which they are conducting a Fundamental Rights Impact Assessment (FRIA) is by nature a ‘high-risk’ system prior to the mitigations being implemented; it is therefore essential that their effectiveness is monitored.

 

They shall also evaluate and document the human oversight measures in place, what determines when human oversight happens (e.g. timely spot checks, or alerts if a threshold is exceeded), and what that human oversight actually is and does. This should include steps (where applicable) for how issues identified by the human are remediated to prevent further similar issues.

 

They shall ensure a process is in place for receiving, handling, logging and monitoring any complaints received. The organisation shall have a complaints procedure that sets out how people can complain, who individuals complain to, and the escalation process. The ethics committee shall also demonstrate consideration of how complaints are received from vulnerable groups, including accessibility issues, and of the potential impact on the individual when triaging complaints (different categories of individuals and groups are likely to be impacted differently: what could be a minor complaint for one individual could be an emergency for an individual belonging to a vulnerable group).

 

The complaints procedure shall detail the steps taken to achieve a resolution to complaints, specifically how the redress given to individuals is determined, and whether this differs for vulnerable groups.

 


 

 

 

 

H. Establish metrics, measurements, and thresholds for governance and monitoring of (G)

 

Establishing the thresholds of the monitoring:

The ethics committee shall not only establish sufficient metrics, measurements, and thresholds to continually monitor the ongoing effectiveness of implemented risk treatments and mitigations in reducing the risk of the AAA system and its impact on individuals’ fundamental rights, but also define the criteria that trigger human oversight of the AAA System (i.e., the thresholds for alerting human intervention). Additionally, the committee shall set up metrics, measurements, and thresholds for monitoring the resolution of complaints.

 

The ethics committee shall assess whether safety thresholds should vary for different categories of individuals, including vulnerable groups. Different categories and groups may experience varying impacts: what is safe for one group might be risky for a member of a vulnerable group.

 

The ethics committee shall establish a procedure for when the defined thresholds are surpassed to a degree where the organisation has reason to consider that use in accordance with the provider's instructions may result in the AI system presenting a risk within the meaning of Article 65(1). When they have identified any serious incident or malfunction as defined in Article 62, they shall first inform the provider, and then the importer or distributor and the relevant national supervisory authorities, and halt use of the AI system. Organisations may also consider adding the incident to the AI Incidents Database as a method of notifying individuals.
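A minimal sketch of such threshold-based monitoring follows, assuming an error-rate metric with per-group thresholds (stricter for a vulnerable group, per Section H) and a two-band escalation scheme. The metric name, the numeric values, and the escalation bands are all illustrative assumptions, not values drawn from the Act.

```python
# Hypothetical monitoring sketch for Section H: per-group safety thresholds
# that trigger human oversight or escalation. All values are assumptions.

ERROR_RATE_THRESHOLDS = {
    "general": 0.05,
    "children": 0.01,  # assumed tighter threshold for a vulnerable group
}

def check_metric(group: str, observed_error_rate: float) -> str:
    """Compare an observed metric against the group's threshold and decide
    whether to continue, alert human oversight, or halt and escalate."""
    threshold = ERROR_RATE_THRESHOLDS.get(group, ERROR_RATE_THRESHOLDS["general"])
    if observed_error_rate <= threshold:
        return "ok"
    if observed_error_rate <= 2 * threshold:  # assumed alerting band
        return "alert human oversight"
    return "halt use and notify provider/authorities"

print(check_metric("children", 0.005))  # ok
print(check_metric("children", 0.015))  # alert human oversight
```

The design point is that the same observed value can be acceptable for one group and trigger oversight for another, which is why thresholds are keyed by group rather than set once for the whole system.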

 

I. Establish a frequency for reassessment of any impact to fundamental rights

Establishing the frequency for reassessment:

The requirement to conduct a Fundamental Rights Impact Assessment (FRIA) is applicable when the high-risk AAA system is first put to use. However, it is important for organisations to consistently monitor the potential risks that could affect individuals’ fundamental rights.

 

The ethics committee shall establish a reassessment frequency for the risks and mitigations identified in the Fundamental Rights Impact Assessments (FRIA). This frequency will vary based on factors such as the type of AAA system and the potential harms and impacts on individuals. In cases where the FRIA identifies significant risks to individuals’ fundamental rights, the reassessment should occur more frequently, with the frequency determined by the severity of the identified risks.
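Risk-scaled reassessment scheduling can be sketched as a mapping from the overall risk grading to a review interval. The intervals below are invented examples for illustration only; neither the FRIA requirement nor the Act prescribes specific periods.

```python
# Hypothetical reassessment scheduler for Section I. The intervals are
# assumed examples; each organisation sets its own based on identified risks.

from datetime import date, timedelta

REASSESSMENT_INTERVAL = {
    "High Risk": timedelta(days=90),    # more serious risks, more frequent review
    "Medium Risk": timedelta(days=180),
    "Low Risk": timedelta(days=365),
}

def next_reassessment(last_assessed: date, overall_risk: str) -> date:
    """Date by which the FRIA should be reassessed, based on the most
    serious overall risk identified in the assessment."""
    return last_assessed + REASSESSMENT_INTERVAL[overall_risk]

print(next_reassessment(date(2024, 1, 1), "High Risk"))  # 2024-03-31
```

Keying the interval to the most serious risk found ensures that a single high-risk finding shortens the review cycle for the whole assessment.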

 

In situations that resemble previously assessed cases, organisations may refer back to earlier Fundamental Rights Impact Assessments (FRIA) or utilise existing assessments carried out by service providers. Certain organisations referred to in Article 51 (e.g. public authorities or Union institutions) shall publish a summary of the results of the Fundamental Rights Impact Assessment (FRIA) as part of the registration of use pursuant to their obligation under Article 51(2).