U of T's Schwartz Reisman Institute and AI Global to develop global certification mark for trustworthy AI
The products and services we use in our daily lives have to abide by safety and security standards, from car airbags to construction materials. But no such broad, internationally agreed-upon standards exist for artificial intelligence.
And yet, AI tools and technologies are steadily being integrated into all aspects of our lives. AI's potential benefits to humanity, such as improving health-care delivery or tackling climate change, are immense. But potential harms caused by AI tools, from algorithmic bias and labour displacement to risks associated with autonomous vehicles and weapons, threaten to erode trust in AI technologies.
To tackle these problems, a new partnership between AI Global and the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto will create a globally recognized certification mark for the responsible and trusted use of AI systems.
In collaboration with the World Economic Forum, the partnership will convene industry actors, policy-makers, civil society representatives and academics to build a universally recognized framework that validates AI tools and technologies as responsible, trustworthy, ethical and fair.
"In addition to our fundamental multidisciplinary research, SRI also aims to craft practical, implementable and globally appealing solutions to the challenge of building responsible and inclusive AI," says Gillian Hadfield, the director of the Schwartz Reisman Institute for Technology and Society and a professor at U of T's Faculty of Law and Rotman School of Management.
Hadfield's current research focuses on innovative design of legal and regulatory systems for AI and other complex global technologies. She also works on "the alignment problem": the challenge of ensuring that an AI system's actions align with what humans would want.
"One of the reasons why we're excited to partner with AI Global is that they're focused on building tangible, usable tools to support the responsible development of AI," says Hadfield. "And we firmly believe that's what the world currently needs. The need for clear, objective regulations has never been more urgent."
A wide variety of initiatives have already sought to steer AI development and deployment in the right directions: governments around the world have established advisory councils or created rules for specific AI tools in certain contexts; NGOs and think tanks have published sets of principles and best practices; and private companies such as Google have released official statements pledging that their AI practices will be "responsible."
But none of these initiatives amounts to enforceable and measurable regulations. Furthermore, there isn't always agreement among regions, sectors and stakeholders about what, exactly, is "responsible" and why.
"We've heard a growing group of voices in recent years sharing insights on how AI systems should be built and managed," says Ashley Casovan, executive director of AI Global. "But the kinds of high-level, non-binding principles we've seen proliferating are simply not enough given the scope, scale and complexity of these tools. It's imperative that we take the next step now, pulling these concepts out of theory and into action."
A global certification mark like the one being built by SRI and AI Global is the next step.
"Recognizing the importance of an independent and authoritative certification program working across sectors and across regions, this initiative aims to be the first third-party accredited certification for AI systems," says Hadfield.
How will it work? First, experts will examine the wealth of existing research and calls for global reform in order to define the key requirements for a global AI certification program. Next, they'll design a framework to support the validation of the program by a respected accreditation body or bodies. They'll also design a framework for independent auditors to assess AI systems against the requirements for global certification. Finally, the framework will be applied to various use cases across sectors and regions.
"AI should empower people and businesses, impacting customers and society fairly, while allowing companies to engender trust and scale AI with confidence," says Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum. "Industry actors that receive certification would be able to show that they have implemented credible, independently validated and tested processes for the responsible use of AI systems."
The project will unfold over a 12- to 18-month timeline, with two global workshops scheduled for May and November of 2021. An event will be held on Dec. 9, 2020.