Developing a ChatGPT Policy for Your Institution

Ian Hartley


May 06, 2023


Artificial Intelligence (AI) technologies, such as ChatGPT, have emerged as transformative tools with immense potential to revolutionize a variety of fields, including education. ChatGPT, powered by advanced language models, offers new avenues for interactive learning, personalized instruction, and educational support. However, as educational institutions increasingly embrace AI, it becomes crucial to establish comprehensive policies that address the ethical implications and preserve learning and academic integrity. Without appropriate policies, powerful AI language models like ChatGPT can be used to plagiarize and complete assignments in a fraction of the time, harming the overall educational experience. Tools like ChatGPT are thus a double-edged sword: educators must ensure that their institution harnesses the benefits of AI while implementing safeguards against its potential negative effects.

This article explores the essential considerations and steps involved in designing an effective ChatGPT policy for educational institutions, and reviews policies being created by educators and by organizations in the private sector.

Why Have a ChatGPT Policy?

As mentioned, ChatGPT is a double-edged sword for education: it instantly gives students the ability to contextualize a difficult concept and build on it, while simultaneously enabling them to churn out a 1,000-word essay on Moby Dick, complete with (admittedly questionable) citations, that requires just a few minutes of editing to seem human-written. Many companies have adopted policies on the use of ChatGPT because savvy employees have been leveraging it to take on multiple jobs and to complete their work for them (1). The issue is widespread enough that the Society for Human Resource Management (SHRM), the industry leader in HR management, has commented on the phenomenon and suggested solutions to mitigate it (1).

Much the same has been seen in education, with students from K-12 to higher education using ChatGPT to complete assignments in record time. Faculty often hold misconceptions about how students use ChatGPT, assuming that students simply copy and paste entire essays from ChatGPT with no editing. In reality, the problem is more nuanced: students use ChatGPT to draft an entire essay and then spend a relatively small amount of time editing it, or turn to even more specialized essay-writing tools like Jasper, which further reduce the editing required. Because unedited ChatGPT output is low quality, faculty often assume that the students using ChatGPT are the lowest-performing students, 'cramming' ahead of a deadline. Yet a 2017 study by Kristine Ottoway et al., published by the American Physiological Society, found that the highest-performing students are the most likely to cheat on exams to maintain their high-achieving status (2). Written assignments are no different: high-performing students use tools like ChatGPT, Jasper, and others to write their essays in a fraction of the time. Contrary to popular belief, it is these high-performing students who start their assignments ahead of time and edit their essays before submission so that they read as authentically human-written.

Faculty often misjudge which students are using ChatGPT to cheat because of a logical fallacy called survivorship bias, with which many academics may already be familiar. Survivorship bias occurs when observational data are skewed by concentrating on the cases that pass a selection process while overlooking those that did not. It was famously described during World War II, when analysts were deciding how to structurally reinforce aircraft by examining the damage on planes returning from battle. The statistician Abraham Wald realized that they should instead concentrate on the planes that did not return, since whatever damage those planes sustained must have been critical.


The same is true for educators trying to detect ChatGPT use: they should not assume that the students they catch are representative, in number or in type, of the entire population of students using ChatGPT. Solving this problem is a matter of structuring an appropriate policy.
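To make the bias concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are entirely invented for illustration: suppose high performers edit their AI output (and so are rarely noticed) while low performers submit it unedited. The sample of students who get caught then looks overwhelmingly low-performing, even though most AI users in this scenario are high performers.

```python
# Survivorship-bias sketch with invented numbers (not real data).
# Assumption: high performers edit their AI output, low performers submit
# it unedited, and unedited AI text is far more likely to be noticed.
n_high, n_low = 60, 40            # AI-using students by performance tier
p_flag_edited = 0.05              # chance an edited submission gets noticed
p_flag_unedited = 0.80            # chance an unedited submission gets noticed

caught_high = n_high * p_flag_edited      # expected high performers caught
caught_low = n_low * p_flag_unedited      # expected low performers caught

share_low_caught = caught_low / (caught_low + caught_high)
share_low_actual = n_low / (n_high + n_low)

print(f"Low performers among caught AI users: {share_low_caught:.0%}")  # ~91%
print(f"Low performers among all AI users:    {share_low_actual:.0%}")  # 40%
```

An instructor looking only at the caught sample would conclude that AI use is a low-performer problem, when under these assumptions the majority of AI users are high performers who simply pass through the "selection process" undetected.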

Sample ChatGPT Policy

The Society for Human Resource Management (SHRM) suggests that companies adopt a boilerplate ChatGPT policy that bars employees from using ChatGPT on the job except for the express purpose of learning more about the tool, or when it is required for their work (1). Rewritten for education, such a policy might look something like this:

“Students are not allowed to use ChatGPT and other third-party generative AI services to complete assignments. This includes using such services to generate computer code or any kind of academic communication, even as a starting point, or during an editing process.”

The above is an example of what many schools would like to enforce as a ChatGPT policy. It is worth noting, however, that many schools would instead like students to engage with ChatGPT more closely on their assignments. While the above policy is a representative example, it can and should be customized and expanded to meet the goals of the institution.

How to Enforce ChatGPT Policies

As with any good law or policy, enforceability is a critical component of success. A perfectly written policy that is unenforceable cannot fulfill its aims. Many schools fall back on the 'Honor System' when policies are difficult or impossible to enforce. With the advent of extremely powerful AI systems, however, the incentive for students to break an honor system becomes extreme: students who abstain from ChatGPT and other AI tools not only miss out on their benefits, but also perceive themselves to be at a disadvantage compared to peers they suspect are using them. It is therefore unlikely that 'Honor System' enforcement will work for policies governing ChatGPT and other AI systems.

Likewise, plagiarism 'detection' tools like GPTZero and Turnitin's AI Checker are unreliable as a basis for enforcement. These tools provide only a likelihood score that a student used AI to write their essay, and such a score cannot fairly support an academic judgment on its own. Instructors who receive submissions that clearly appear to be written entirely by ChatGPT are already struggling to respond, because a score is not evidence that the content was plagiarized. Students are aware of this as well, so they are incentivized to keep using ChatGPT, knowing they are unlikely to be disciplined for plagiarism.
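A quick base-rate calculation shows why a likelihood score alone cannot fairly anchor an academic judgment. The numbers below are illustrative assumptions, not figures from any real detector: even a detector with a seemingly modest false-positive rate wrongly flags a meaningful number of students when most submissions are honestly written.

```python
# Base-rate sketch with invented numbers (not any real detector's specs).
n_students = 1000
ai_use_rate = 0.20          # assume 20% of submissions involved AI
false_positive_rate = 0.05  # detector flags 5% of honestly written essays
true_positive_rate = 0.70   # detector flags 70% of AI-assisted essays

flagged_honest = n_students * (1 - ai_use_rate) * false_positive_rate
flagged_ai = n_students * ai_use_rate * true_positive_rate

# Of all flagged students, how many actually used AI?
precision = flagged_ai / (flagged_ai + flagged_honest)

print(f"Students falsely flagged: {flagged_honest:.0f}")  # 40
print(f"Chance a flag is correct: {precision:.0%}")       # ~78%
```

Under these assumptions, roughly one in five flags would accuse an innocent student, which is why a probabilistic score by itself cannot support a disciplinary decision.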

The solution to enforcing ChatGPT policies is to implement 'prevention' tools rather than 'detection' tools. Authoriginal is a prevention tool designed to stop all forms of plagiarism, including AI plagiarism and purchased essays. It is a simple LTI app and browser extension that flags, and takes a screenshot, if a student uses AI tools while writing an assignment, alongside a variety of other flag triggers. Because a system like Authoriginal prevents the use of ChatGPT while an assignment is being written, it closes the loopholes that detection tools leave open and gives instructors certainty about whether tools like ChatGPT were used. By implementing Authoriginal, institutions can enforce a ChatGPT policy that aligns with their goals.

If you’re interested in learning more about ChatGPT, please reach out to us to set up a demo.


1. "How to Create the Best ChatGPT Policies." Society for Human Resource Management (SHRM).

2. Kristine Ottoway, et al. "Cheating After the Test: Who Does It and How Often?" American Physiological Society, 2017.
