What is a trustworthy artificial intelligence system?
In April 2019, the European Commission published the Ethics Guidelines for Trustworthy Artificial Intelligence (hereinafter, the Ethics Guidelines). This excellent document, which we strongly encourage you to read, was prepared by a group of independent experts set up in June 2018.
The objective of the document is to lay the groundwork for a regulatory framework in the European Union (EU) that achieves a trusted ecosystem for the safe development of artificial intelligence (including AI algorithms) across different sectors, in full respect of the values and rights of EU citizens, so that AI can be considered trustworthy.
According to the Ethics Guidelines, the trustworthiness of AI relies on three components that must be present throughout the entire lifecycle of the AI system:
- AI must be lawful, i.e., comply with all applicable laws and regulations.
- AI must be ethical, so as to ensure respect for ethical principles and values.
- AI must be robust, both technically and socially, as AI systems, even if well-intentioned, can cause accidental harm.
The Ethics Guidelines, which are addressed to all stakeholders (developers and general providers of AI systems; users of AI systems; the state and society as a whole), aim to provide more than just a list of ethical principles. They provide guidance on how to put these principles into practice in real life. The guidance set out by these Guidelines, which is what we are interested in, is developed at three levels of abstraction, ranging from Chapter I (the most abstract) to Chapter III (the most concrete).
Chapter I articulates the fundamental rights and a set of associated ethical principles that are crucial to apply in AI contexts (Foundations of trustworthy AI). Chapter II lists the seven key requirements that AI systems should meet to make trustworthy AI a reality, and proposes technical and non-technical methods that can contribute to their implementation (Realizing trustworthy AI). Finally, Chapter III provides an assessment checklist for trustworthy AI that can help put these seven requirements into practice (Evaluating trustworthy AI).
Foundations of trustworthy artificial intelligence
The foundations of trustworthy AI are based on four core ethical principles, all rooted in fundamental rights, which must be adhered to in order to ensure that AI systems are developed, deployed and used in a trustworthy manner.
Respect for human autonomy. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, AI systems should be designed in ways that augment, complement and enhance people's cognitive, social and cultural skills. AI systems should follow human-centered design principles and leave ample opportunity for human choice. This implies ensuring human oversight and control over the work processes of AI systems.
Prevention of harm. AI systems should neither cause nor exacerbate harm, nor otherwise adversely affect humans. AI systems must be safe, and it should be ensured that they cannot be put to malicious uses. Preventing harm also means taking into account the natural environment and all living beings.
Equity. The development, deployment and use of AI systems must be equitable. This means ensuring a fair and equal distribution of benefits and costs, and ensuring that individuals and groups are not subjected to unfair bias, discrimination or stigmatization.
Explainability. Explainability is crucial to building users' trust in AI systems and to maintaining that trust. Processes must be transparent and decisions must be explainable. It is not always possible to explain why a model has generated a particular outcome or decision (or what combination of factors contributed to it); such cases, known as "black box" algorithms, require special attention. The degree to which explainability is needed depends greatly on the context and on the severity of the consequences of an erroneous or inappropriate outcome (e.g., an AI system that generates unsound purchase recommendations raises far fewer ethical concerns than one that assesses whether a person convicted of a criminal offence should be granted parole).
Realization of trustworthy AI
To ensure that AI is trustworthy, seven requirements must be continuously assessed and addressed throughout the lifecycle of AI systems. While all of these requirements are equally important, the context, and the tensions that may arise between them, must be taken into account when applying them across different domains and sectors. These requirements are listed below:
1) Human agency and oversight
AI systems should support the autonomy and decision-making of individuals, as prescribed by the principle of respect for human autonomy. This requires that AI systems act as enablers of a democratic, prosperous and equitable society, supporting human agency and promoting fundamental rights, while allowing for human oversight.
2) Technical robustness and safety
This requirement is closely linked to the principle of prevention of harm. Technical robustness requires that AI systems be developed with a precautionary approach to risks, so that they always behave as expected, minimize unintended and unforeseen damage, and avoid causing unacceptable harm. This should also hold under potential changes in their operating environment, or in the presence of other agents (human and artificial) that may interact with the system in an adversarial manner. In addition, the physical and mental integrity of humans should be ensured.
3) Privacy and data governance
Privacy is a fundamental right that is particularly affected by AI systems and is closely related to the principle of prevention of harm. Preventing harm to privacy also requires adequate data governance, covering the quality and integrity of the data used, their relevance to the domain in which the AI systems will be developed, their access protocols, and the ability to process data without violating privacy.
4) Transparency
This requirement is closely related to the principle of explainability and encompasses transparency of the elements relevant to an AI system: the data, the system and the business models.
5) Diversity, non-discrimination and fairness
Trustworthy AI requires ensuring inclusion and diversity throughout the lifecycle of AI systems. Beyond taking all affected stakeholders into account and ensuring their participation throughout the process, it is also necessary to guarantee equal access and equal treatment through inclusive design processes. This requirement is closely related to the principle of equity.
6) Social and environmental well-being
In line with the principles of equity and prevention of harm, society at large, other sentient beings and the environment must also be considered stakeholders throughout the AI lifecycle. The sustainability and ecological responsibility of AI systems should be promoted, and research into AI solutions for issues of global concern, such as the Sustainable Development Goals, should be encouraged. Ideally, AI should be used for the benefit of all human beings, including future generations.
7) Accountability
This requirement complements the previous ones and is closely related to the principle of equity. It requires the establishment of mechanisms to ensure responsibility and accountability for AI systems and their outcomes, both before and after implementation.
Evaluation of trustworthy AI
The Ethics Guidelines for Trustworthy AI also provide a non-exhaustive checklist for assessing the trustworthiness of AI systems and putting trustworthy AI into practice. This list applies in particular to AI systems that interact directly with users, and is primarily aimed at developers and deployers of AI systems, whether developed in-house or acquired from third parties.
To illustrate how an assessment is made to determine whether an AI system is trustworthy, we have designed our own assessment checklist, based on the Guidelines, and applied it to a specific sector: the financial sector. We chose this sector because artificial intelligence algorithms are very common in this industry and because direct interaction with users is frequent and occurs daily.
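A checklist-based self-assessment of this kind can be sketched in a few lines of code. The sketch below is purely illustrative: the answer scale, weights and scoring threshold are our own assumptions, not part of the Ethics Guidelines or of the ALTAI checklist; only the seven requirement names come from the Guidelines.

```python
# Illustrative sketch only: the answer scale, weights and scoring are our own
# assumptions, not part of the Ethics Guidelines or of the ALTAI checklist.

# The seven key requirements from Chapter II of the Ethics Guidelines.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Social and environmental well-being",
    "Accountability",
]

# Hypothetical weights for the three possible self-assessment answers.
WEIGHTS = {"yes": 1.0, "partially": 0.5, "no": 0.0}

def assess(answers: dict) -> tuple:
    """Score a self-assessment with one 'yes'/'partially'/'no' answer
    per requirement; return (score in [0, 1], requirements needing work)."""
    missing = [r for r in REQUIREMENTS if r not in answers]
    if missing:
        raise ValueError(f"Unanswered requirements: {missing}")
    score = sum(WEIGHTS[answers[r]] for r in REQUIREMENTS) / len(REQUIREMENTS)
    gaps = [r for r in REQUIREMENTS if answers[r] != "yes"]
    return score, gaps

# Example: a system strong on most requirements but weak on two of them.
answers = {r: "yes" for r in REQUIREMENTS}
answers["Transparency"] = "no"
answers["Accountability"] = "partially"
score, gaps = assess(answers)
```

A real assessment, like the one in the ALTAI document, asks many questions per requirement rather than one; this sketch only shows the aggregation idea.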
That is why we have created FIABILITO, a chatbot installed on this website, with which you can engage in a friendly, direct written conversation and, if you work in the financial sector as a developer or manager of an AI system, check in a preliminary way whether your AI system can be considered trustworthy.
For the creation of Fiabilito, in addition to the Ethics Guidelines, we have also taken into account the following documents:
- The White Paper on Artificial Intelligence – A European approach to excellence and trust, published on February 19, 2020 and prepared by the European Commission.
- The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment, published on July 17, 2020, prepared by the High-Level Expert Group on Artificial Intelligence, set up by the European Commission.
- The Proposal for a Regulation of the European Parliament and of the Council, laying down harmonized rules on Artificial Intelligence, prepared by the European Commission, and published on April 21, 2021, and its annexes.
- The Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence, prepared by the European Data Protection Board and the European Data Protection Supervisor, published on June 18, 2021.
- The Recommendation on the Ethics of Artificial Intelligence, approved by UNESCO on November 18, 2021.
- The document Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción, prepared by the Spanish Data Protection Agency, published in February 2020.
- The document Requisitos para Auditorías de Tratamientos que incluyan AI, prepared by the Spanish Data Protection Agency, published in January 2021.