Blog: Five principles for citizen-friendly artificial intelligence – The Mandarin
Forty-two countries, including Australia, formally adopted the first set of intergovernmental policy guidelines on artificial intelligence on Wednesday.
The five principles are intended to guide governments, organisations and individuals in designing and operating AI systems that put people's best interests first, and to ensure that those who design and operate such systems are held accountable for their proper functioning.
The principles state:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards — for example, enabling human intervention where necessary — to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
- AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
The OECD’s 36 member countries, plus Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, agreed to the guidelines at a major annual meeting in Paris, which is focusing this year on harnessing the digital transition for sustainable development.
“Artificial intelligence is revolutionising the way we live and work, and offering extraordinary benefits for our societies and economies,” said OECD Secretary-General Angel Gurría.
“Yet, it raises new challenges and is also fuelling anxieties and ethical concerns. This puts the onus on governments to ensure that AI systems are designed in a way that respects our values and laws, so people can trust that their safety and privacy will be paramount.
“These principles will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all.”
The development of the principles was guided by an expert group of more than 50 members from governments, academia, business, civil society, international bodies, the tech community and trade unions.
Additionally, the OECD recommends governments:
- Facilitate public and private investment in research and development to spur innovation in trustworthy AI.
- Foster accessible AI ecosystems with digital infrastructure and technologies, and mechanisms to share data and knowledge.
- Create a policy environment that will open the way to deployment of trustworthy AI systems.
- Equip people with the skills for AI and support workers to ensure a fair transition.
- Co-operate across borders and sectors to share information, develop standards and work towards responsible stewardship of AI.
The Australian government recently released a discussion paper to inform the development of an ethics framework for AI.