Ever since OpenAI released ChatGPT, many educational institutions have been in a state of panic.
Professors across the country began adding policies to their syllabi and course materials regarding the use of it and other artificial intelligence tools.
The College Board even released a statement prohibiting its use in any student-submitted work.
ChatGPT itself is one of the most prominent examples of a new type of artificial intelligence called a large language model (LLM).
Internally, after reading the beginning of a “prompt” – such as a question or request from a user – the model repeatedly guesses the most likely next word. Older language models were typically only useful for filling in the blanks of a sentence.
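To make that concrete, here is a minimal sketch of next-word generation in Python. The probability table is a made-up toy, not any real model’s data; an actual LLM conditions on the entire prompt and chooses from tens of thousands of possible tokens.

```python
import random

# Toy "model": for each current word, the probabilities of the next word.
# Purely illustrative; a real LLM uses the whole context, not one word.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "essay": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word: str, max_words: int = 4) -> str:
    """Repeatedly sample the next word, one guess at a time."""
    words = [prompt_word]
    while len(words) < max_words and words[-1] in NEXT_WORD_PROBS:
        options = NEXT_WORD_PROBS[words[-1]]
        # Sample in proportion to the model's probability for each word.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```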
OpenAI’s previous models, GPT-2 and GPT-3, could complete text fairly well, but they couldn’t hold a conversation with users.
To make the “chat” concept work, OpenAI used reinforcement learning from human feedback – human labelers ranked the model’s candidate responses, rewarding the ones that made it a better conversational partner.
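At the data level, that human feedback takes the form of pairwise preferences: for the same prompt, one response was preferred over another. The sketch below shows how such rankings are typically turned into a training signal for a “reward model” using a Bradley-Terry style loss; the field names are hypothetical and this is a generic illustration from the RLHF literature, not OpenAI’s actual pipeline.

```python
import math

# One labeler judgment: for the same prompt, "chosen" was preferred over
# "rejected". Field names are illustrative, not OpenAI's actual schema.
preference = {
    "prompt": "Explain photosynthesis.",
    "chosen": "Photosynthesis is the process by which plants...",
    "rejected": "photosynthesis is when plants eat light lol",
}

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry loss: small when the reward model scores the
    human-preferred response above the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# A well-trained reward model rates the chosen response higher, so the
# loss is small; that reward signal then steers further fine-tuning.
print(preference_loss(2.0, -1.0))  # ~0.049 (model agrees with the human)
print(preference_loss(-1.0, 2.0))  # ~3.049 (model disagrees)
```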
Thanks to this extensive training, ChatGPT reached a new level of competency: it can write at length about a given topic, and it has knowledge of a wide variety of subjects.
Now, if you open ChatGPT’s web interface, you can ask it to write anything for you, no matter how niche the subject – and it will follow all of your parameters, including length and essay structure.
This is the main reason educators are concerned about ChatGPT.
Essays have long been a cornerstone of college coursework; requiring students to write helps them demonstrate a deeper level of knowledge about a topic than a simple multiple-choice exam can.
With ChatGPT, however, students no longer need to know the content they’re writing about – the model has in-depth knowledge of most subjects – nor do they need to put in the work of actually writing the essay.
So why, then, isn’t this an actual problem?
It’s quite simple: ChatGPT’s work is easy to spot.
For one, each student’s writing style is distinct; over time, most educators learn to spot plagiarism naturally, especially when an essay is written entirely by someone else or by a paid service, just by comparing writing styles.
ChatGPT can’t reliably match a student’s writing style, even when given samples of their previous work, so this will remain an effective method.
There are also more inherent attributes of how LLMs generate text that are easy to spot. Because they always guess the most likely next word, common words like “the” tend to appear more frequently than in human writing.
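As a toy illustration of that kind of statistical tell, here is a naive frequency check. This is my own illustrative heuristic with a made-up threshold, not how real detectors work; those typically score text by how predictable it is to the model itself (its “perplexity”).

```python
import re

# Common function words whose frequency is sometimes cited as a tell.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is"}

def function_word_rate(text: str) -> float:
    """Fraction of all words that are common function words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FUNCTION_WORDS for w in words) / len(words)

# Made-up baseline for illustration only.
HUMAN_BASELINE = 0.35

def looks_generated(text: str) -> bool:
    return function_word_rate(text) > HUMAN_BASELINE

sample = "The analysis of the data is the key to the success of the essay."
print(round(function_word_rate(sample), 2), looks_generated(sample))  # 0.64 True
```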
While not everyone will notice it, many people who have played around with ChatGPT pick up on a subtle tone in its writing that marks it as AI-generated content.
OpenAI has also now released a detector for ChatGPT-generated content; using it requires almost no work on the part of instructors, and future work could integrate it into existing platforms like Canvas and Turnitin.
Overall, ChatGPT doesn’t currently present a risk to collegiate academic integrity.
Instead of panicking over it, faculty should simply make sure they have systems in place to catch those who try to skip out on their work via ChatGPT, and figure out how to use this new tool as an opportunity to better help their students.