- AI should serve the interests of individuals and the planet by promoting inclusive growth, sustainable development and well-being.
- AI systems should be designed to respect the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards, such as allowing for human intervention when necessary, to advance a fair and equitable society.
- Transparency and responsible disclosure of information related to AI systems should be ensured, so that individuals know when they are interacting with such systems and can challenge the results.
- AI systems should be robust, safe and secure throughout their lifecycle; any associated risks should be assessed and managed on an ongoing basis.
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in accordance with the above principles.
- Facilitate public and private investment in research and development to stimulate innovation in trustworthy AI.
- Promote the development of accessible AI ecosystems, including digital technologies and infrastructures, as well as mechanisms for sharing data and knowledge.
- Build a framework for action that paves the way for the deployment of trusted AI systems.
- Equip individuals with the skills they need in the AI field and ensure a just transition for workers.
- Foster transnational and cross-sectoral cooperation to share information, set standards and collaborate on a responsible approach to trusted AI.
