AI-powered education platforms are proliferating, but the market is clouded by confusion and hype.
To build public confidence in AI's potential for education, the industry needs to adopt common benchmarks and standards that ensure AI is used safely and responsibly in education.
Today, users – parents, students, and instructors – have no way to tell whether a tool is safe and effective, and many are wary of AI-enabled tools as a result.
To address the problem, Riiid, a leader in AI-powered education solutions, and DXtera Institute, a nonprofit membership organization that uses technology to lower barriers in education delivery, have formed a cross-sector alliance of companies, nonprofit organizations, and education technology associations to work on an AI in Education benchmark initiative.
The initiative, launched in August, focuses on establishing benchmarks and standards in four critical categories – Safety (security and privacy), Accountability (defining stakeholder responsibilities), Fairness (equity, ethics, and absence of bias), and Efficacy (quantified improvements in learning outcomes). In a word, SAFE educational AI.
DXtera, a trusted nonprofit, manages the day-to-day work of the alliance and serves as its fiscal and contracting agent, hiring staff and experts. In the long run, the alliance intends to become self-supporting through membership dues, sponsorships, and philanthropic support. Riiid, which funded the initiative's launch, continues to play an active role in recruiting new member organizations.
In the three months since its launch, the alliance has grown from 20 to more than 100 members, representing 15 countries. Organizations involved include Carnegie Learning, ETS, GSV Ventures, the German Alliance for Education, EduCloud Alliance, and Digital Promise. The alliance has also aligned itself with the Broadband Commission for Sustainable Development, a joint UNESCO-ITU body whose goal is to connect everyone in the world to the Internet.
The alliance expects to eventually hire paid experts to develop standards against which products can be tested and certified. It won't be working in a vacuum or developing those standards from scratch.
Underwriters Laboratories, the private safety-certification company now known as UL, is a member of the alliance and has independently developed a rubric for inspecting algorithms. UL has participated in the safety analysis of new technologies since it was founded in 1894.
Nearly every American product that uses electricity carries the UL mark, indicating that it has undergone rigorous testing against established safety standards.
The alliance intends to do something similar for AI education tools and platforms, eventually implementing a voluntary review process that would give consumers confidence in such products, much as nutrition labels do for packaged foods today.
Stringent testing may also help determine whether products comply with existing data privacy laws, such as the European Union's General Data Protection Regulation (GDPR) and California's privacy laws.
The alliance hopes that school districts and other organizations will then use alliance certification to guide their purchases of AI-enabled education technology.
The alliance isn't focused on the US market alone. It is engaged with organizations in Israel and Russia, with the EU EdTech Consortium, which represents all the EU countries, and with Education Alliance Finland, among others. The German Alliance for Education brings to the table representatives from about 100 groups, ranging from education ministries and companies to universities and schools.
AI has the potential to transform education by relieving teachers of administrative burdens and personalizing learning paths for students. But to realize that potential, we need recognized standards that everyone can trust. We're calling on professionals from all levels of the education industry – educational delivery agents, users, and governments alike – to get involved.