November 28, 2023

Mastering AI Policies: A Framework for Institutional Alignment

“The delicate issue will be to form policy that doesn’t curb innovation, but does help stabilize approaches to a rapidly moving, highly disruptive technology.” 

-Sandra Jordan, Ph.D., Chief of Staff/Vice President, SACSCOC

Access the Anthology AI Policy Framework

If you work in higher education, it is impossible to ignore the buzz surrounding generative artificial intelligence (AI). Opinions vary, and keeping up can feel like a daunting task. One thing, however, is certain: it is time to develop strategic, forward-thinking policies that steer the ethical and effective incorporation of AI across college and university campuses.

Expanding the Scope of AI Policies: Beyond Teaching and Learning

Most existing AI policies in higher education are tied to teaching and learning and focused on a specific academic practice or discipline, which makes sense given the nature of generative AI itself. Academic integrity is critical to the success of any institution, and generative AI has severely tested that integrity. But generative AI is also used in administrative and operational activities, so there is interest in, and a need for, policies covering those uses as well. Furthermore, piecemeal teaching and learning policies that vary across departments can cause confusion if a broader, well-understood policy is not in place.

“It is not difficult to imagine virtually every aspect of college and university life influenced by developments in the field of generative AI. It is incumbent on IHEs to develop policies that are consistent across the institution, that provide the freedom to leverage the known—and yet unknown—benefits of AI while protecting the mission and integrity of the institution from possible deleterious consequences.” 

-Dr. Michael Moore, Vice President for Academic Affairs, University of Arkansas System

However, the potential impact of generative AI extends far beyond the teaching and learning core of the institution. Administrative work—particularly “administrivia”—can be offloaded to AI tools, and some, though not all, communications tasks can be supported by AI chatbots and large language models like ChatGPT and Bard. Having students engage with a chatbot to answer frequently asked questions is now customary practice, but students must still be able to reach a human when needed so that they feel fully supported.

Coordinating and Aligning Efforts Across Campus

Does the existence of generative AI tools in admissions negate the importance of the application essay? To what extent should AI be used to support accessibility in administrative, student services, and teaching and learning functions? Institutions will need to make decisions on these types of issues at the policy and practice levels.

Many AI policies are already in development or have been implemented. While this is a good thing, what happens when an institution develops multiple AI policies that are not coordinated with one another? The institution is then confronted with potential policy conflicts, security and ethical concerns, and confusion.

AI can affect the entire institution, from academics to governance and administration to operational processes. Because of this, it is important to have an institutional policy that defines the institution’s overall stance on AI, along with a process for adapting that policy to fit individual colleges, departments, administrative units, and operations. Institutions must avoid having multiple policies governing the same issue, especially if those policies are not aligned. Once an institutional policy broad enough to allow for such adaptations is developed, a process should be put in place specifying who is permitted to draft department-specific policies and what approvals are needed prior to implementation.

“At JCU we are working towards an institutional AI policy to ensure that we have a whole-of-institution set of principles that all users can refer to and be guided by as they consider the application of AI within their respective areas.”  

-Associate Professor Andrea Lynch, Dean, Centre for Education and Enhancement, James Cook University, Australia 

A critical component of creating a strong institutional policy is obtaining input from all stakeholders, including students. This keeps everyone on the same page, helps avoid conflicting policies, reduces risk, and keeps units and departments from creating ad hoc policies that do not align with the institutional stance on generative AI. Regional accreditors are also paying attention to AI: large language models and other generative AI tools are now topics being addressed at their annual meetings.

For example, at the SACSCOC conference this December (2023), the organizers are hosting a “Beyond ChatGPT” panel to highlight innovative curricula and research initiatives at member institutions. It is reasonable to expect accreditor guidance on the use of AI on campus at some point, given its potential impact on academic integrity. However, institutions should not remain idle in anticipation of such guidance. Waiting for a structured policy framework to emerge may leave the institution vulnerable to many risks and complications.

“As educators committed to preparing our students for the road ahead, we’ll absolutely need to do the good work of helping them learn about, with, and beyond AI.” 

-Dr. Mark David Milliron, President, National University 

Establishing an AI Policy Framework

To help institutions better coordinate their AI policies, the Anthology Education and Research Center has developed a suggested framework for creating an overarching, institutional AI policy as well as department-specific policies that can be tailored to various areas of campus. In the framework, we highlight components that can be incorporated into an institutional AI policy; you might consider it a checklist of sorts.

The framework is intended to provide general guidelines that institutions should consider when adopting an AI policy. Each institution should modify the framework to reflect its unique vision, mission, and goals.

AI Governance in Action: An Institutional Framework for Success

The Anthology Trustworthy AI Principles are incorporated into each of the categories listed below in our institutional framework, which walks through each step and provides the questions and discussion points needed to complete it:

  • Stakeholder Identification and Meetings
  • Define Institutional Position on AI
  • Understand Existing Policy Environment
  • Policy Development
  • Implementation of Policy

The institutional policy lays the groundwork and defines the scope and scale of the school’s AI position. Beneath it, additional policies customized to specific departments, disciplines, functions, or situations will be required at the departmental or unit level. These department-specific policies should be developed in the same manner as the institutional policy, with appropriate stakeholders represented, such as administrators (for alignment with institutional policy), faculty, staff, and students (as appropriate for the college/school, unit, or department).

Whether the policy is institutional or departmental, it requires a communication plan to keep faculty and staff informed, a timeline and strategy for implementation, and an agreed-upon set of consequences for non-compliance. In addition, AI policies should be reviewed regularly to ensure they align with the most recent research, developments, and proven best practices. Because AI is a disruptive technology, the institution should be flexible and willing to make policy adjustments as needed in a systematic manner, ensuring that changes to AI policies at the department level remain consistent with the institutional AI policy.

“We must regulate the misuse of generative AI by students and academics, since failure to do so creates distance with the biggest skeptics of this new technology when evidence of fraud inevitably comes out. Therefore, we must regulate, but at the same time allow ourselves space for the development and correct use of new technologies within our universities.” 

-Sergio Mena, President, Universidad Gabriela Mistral, Chile

The Anthology AI Policy Framework suggests steps for consideration and questions for stakeholders to answer in order to facilitate the adoption of an institution-wide AI policy.

Access the Anthology AI Policy Framework

AI Policy Framework Resources 

The following is a selective list of articles and documents that may be helpful to institutions:  

Anthology, AI, Academic Integrity, and Authentic Assessment: An Ethical Path Forward for Education (Anthology, 2023)  

Anthology, Trustworthy AI Principles (Anthology, 2023)  

Burke, Lilah, Should colleges use AI in admissions? (Higher Ed Dive, 2023)  

Chan, Cecilia Ka Yuk, A comprehensive AI policy education framework for university teaching and learning (International Journal of Educational Technology in Higher Education, 2023)  

Chapman University, Guidelines Relating to Data Privacy and Security When Using Generative Artificial Intelligence Tools (Chapman University, 2023)  

Drozdowski, Mark J., EdD, 5 Ways Artificial Intelligence Will Transform Higher Education (Best Colleges, 2023)  

European Commission, A European approach to artificial intelligence (EU, 2023)  

Knox, Dan; Pardos, Zach, Toward Ethical and Equitable AI in Higher Education (Inside Higher Ed, 2022)  

Louder, Justin, Adapting Curriculum for the AI-Driven Economy (Anthology, 2023)  

McMurtrie, Beth, What Will Determine AI’s Impact on College Teaching? 5 Signs to Watch (Chronicle of Higher Education, 2023)  

Miao, Fengchun; Holmes, Wayne; Huang, Ronghuai; Zhang, Hui, AI and education: guidance for policy-makers (UNESCO, 2021)  

National Institute of Standards and Technology (NIST), AI Risk Management Framework (NIST, 2023)  

OECD, OECD AI Principles overview (OECD AI Policy Observatory, 2019)  

Oregon State University, Guidance for online course development and the use of artificial intelligence tools (Oregon State University, 2023)  

Prasad, Pankaj; Byrne, Padraig; Siegfried, Gregg, Market Guide for AIOps Platforms (Gartner, 2022)  

Stanberry, Martin; Bernard, Jack; Storch, Joseph, In an AI World, Let Disability Access Lead the Way (Inside Higher Ed, 2023)  

Syllabi Policies for AI Generative Tools (Example policies for teaching and learning)  

WICHE Cooperative for Educational Technologies (WCET), AI in Higher Education Resources. 

Willsea, Mallory, Embrace AI To Boost Your Enrollment Marketing Team’s Productivity (Inside Higher Ed, 2023)