An Ethical AI Q&A, Part One
The rise of artificial intelligence (AI) has brought incredible possibilities, but also trepidation about the ethical implications of its use. As higher education institutions navigate what AI means for teaching and learning and begin to leverage it to enhance the student experience, doing so ethically is a top priority.
Anthology is committed to the safe and ethical use of AI and AI tools. For more insight into ethical AI best practices for higher education leaders, we asked our Compliance, Trustworthy AI, and Privacy Officer Stephan Geering for his take. In this Q&A-style post, he shares some practical and effective strategies institutions can use to establish ethical AI policies.
Q: Staying in control: What practices should institutions establish to ensure that they stay in control and that there is human oversight of AI systems?
A: AI has enormous promise to transform education, but one of the biggest risks is that we hand over too much control to AI systems. Ensuring that humans stay in control and are accountable is a key pillar of our own Trustworthy AI Framework and of the AI Policy Framework we created for our customers.
But how can institutions implement this in practice? One important element is to establish a governance structure across the whole institution, including a dedicated senior lead for the responsible use of AI. Good governance with clear responsibility helps ensure that any planned uses of AI, including generative AI, are identified and reviewed before they are implemented. Responsible technology partners like Anthology also give customers control over the rollout of generative AI features by requiring an opt-in. This lets institutions decide for themselves, based on their own internal AI policies and risk appetite, whether to use such features.
Q: Transparency and inclusion: What ethical and legal practices should an institution follow to ensure that staff and students are informed and involved in the institution’s use of AI systems that impact them?
A: For the governance structure I just mentioned to work well, institutions should include diverse, multi-disciplinary input from across the institution, not only when they set their initial AI policies and framework but on an ongoing basis. Institutions should also consider including student and staff representation. We ourselves have implemented a Trustworthy AI Council to ensure regular, multi-disciplinary input on our approach to AI, and we work with our Artificial Intelligence Advisory Council to ensure customer involvement and feedback.
Keeping everyone informed is a challenge. In our experience, what works well is a combination of a high-level statement on the institution's approach to AI, detailed explanations of specific AI tools, and FAQs for staff that can be updated regularly to reflect new developments. A key purpose of our AI transparency notes is to enable customers to inform their relevant teams and students and to leverage this information for their own transparency efforts.
Q: Training and awareness: How can institutions make sure their staff and students understand the opportunities and risks when using AI?
A: Institutions need to train their staff regularly on the opportunities and risks of AI and on their AI policies, not just through annual training but also through ongoing awareness communications. Staff who understand not only the benefits of generative AI but also its risks, such as "hallucinations," data privacy concerns, and bias, will deploy and use generative AI features in a more targeted way and with the necessary caution. This is critical to reaping the benefits while minimizing the risks.
Q: Deciding on AI use cases: How can institutions identify use cases that maximize the opportunities of generative AI while minimizing the risks?
A: This question ties in with the previous one on training. Understanding how generative AI works, and where it performs well and where it does not, is vital to applying it where it has maximum impact and minimum risk. Increasingly, institutions need internal experts in both AI technology and the management of AI risk. This is no easy feat. Internal role-based training, as well as external training and certifications for critical teams such as IT, legal/privacy, and security, can help institutions build and maintain the specialist expertise needed to use generative AI wisely.
You can check out part two of this discussion now, and be sure to join us on our Ethical AI in Action World Tour, taking place in major cities across the globe this October, November, and December.
Stephan Geering
Stephan Geering is Anthology's compliance, trustworthy AI, and privacy officer. Stephan leads Anthology's global Compliance, Data Privacy, and Trustworthy AI Programs. He previously worked as Citigroup's EMEA & APAC chief privacy officer and as deputy data protection commissioner at one of Switzerland's regional data protection authorities.