New AI Policy Framework from Anthology empowers higher education to balance risks and rewards of AI

DUBAI / GULF TIME

Anthology, the leading provider of education solutions that support the entire learner lifecycle, announced the availability of its AI Policy Framework, a resource to support higher education institutions that are interested in developing and adopting specific policies and programs on the ethical use of AI within their institutions. The AI Policy Framework provides guidance on how to evaluate AI's broad implications, how to draft and implement policies, and a model for establishing governance.
Anthology’s AI Policy Framework is built upon seven principles: fairness; reliability; humans in control; transparency and explainability; privacy, security, and safety; value alignment; and accountability. These principles are rooted in international standards, aligning with the NIST AI Risk Management Framework, the EU AI Act, and the OECD Principles.
“Higher education faced a transformative moment as generative AI exploded on the scene with ChatGPT. As a result, many institutions raced to create policies largely focused on how to control its use without giving much consideration to how to harness its power,” said Bruce Dahlgren, CEO of Anthology. “We believe that once you put the right guardrails in place, attention will quickly shift to how to leverage AI to drive student success, support operational excellence, and gain institutional efficiencies. As the leader in this space, we have a responsibility to help our customers balance the risks and rewards.”
AI use has implications across the entire institution, from academics to governance and administration to operational processes. Because of this, the AI Policy Framework stresses the importance of having an institutional policy that defines the institution's overall stance on AI, along with a process to follow when adapting that policy to fit individual colleges, departments, administrative units, and operations. Anthology's AI Policy Framework considers governance, teaching and learning, operational and administrative aspects, copyright and intellectual property, research, academic dishonesty, policy updates, and consequences of non-compliance.
A recent survey asked university leaders how AI could impact higher education and university operations. While 34% of respondents cited concerns about the ethical use of AI, nearly as many (33%) cited the value of AI in providing personalised learning experiences. Almost one in five institutional leaders believe AI can help develop enrollment or admissions campaigns.
