ChatGPT is a large language model developed by OpenAI. Two characteristics make it exceptional: (i) the ability to generate "human-like" responses to user prompts and (ii) public accessibility. Early adopters have used ChatGPT's "free research preview" to accelerate or automate many tasks, from meal planning to software debugging.
However, large language models like ChatGPT are not infallible. OpenAI warns that ChatGPT "may occasionally generate incorrect information" and "produce harmful instructions or biased content" — vulnerabilities that malicious users can exploit and over-eager users may fail to recognize. Stack Overflow, an online Q&A platform for programmers, banned ChatGPT-generated content after users flooded the forums with believable but incorrect answers generated by the model. Moderators posted the following justification for the ban:
"While the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good."
One group seems particularly susceptible to misusing ChatGPT: students, who have clear incentives to use the model to shortcut their studies. After all, who wouldn't want to work less and get better grades?
Concerned educators contend that ChatGPT is a crutch, not a tool. In early January, New York City Public Schools (the largest school district in the U.S.) banned ChatGPT from school devices and networks. Several smaller districts have followed suit; Seattle Public Schools banned the model along with six other "cheating sites."
On the one hand, students could benefit from learning to use ChatGPT. As technology advances, technological competency is becoming an increasingly valuable professional skill. Furthermore, learning about ChatGPT's limitations could help students practice critical thinking and a healthy skepticism toward untested technology. From this perspective, banning ChatGPT in schools would be akin to banning computer science: counterproductive and old-fashioned.
On the other hand, students who use ChatGPT as a replacement for learning will miss crucial learning opportunities. For example, students who use the model to automate their homework will neglect important skills such as research, critical thinking, and writing. Additionally, one might argue that students who submit ChatGPT-generated content unfairly undercut other students' original work by "plagiarizing" the model.
It's difficult to say whether ChatGPT's benefits will outweigh its risks for students in the long term. Much will depend on how educators adapt to this new technology: Will schools develop methods to detect and regulate the use of large language models faster than students take advantage of them? We await the answer.
Read our blog post about the opportunities and risks ChatGPT creates for professionals.
Synaptiq is an AI and data science consultancy based in Portland, Oregon. We collaborate with our clients to develop human-centered products and solutions. We uphold a strong commitment to ethics and innovation.
Contact us if you have a problem to solve, a process to refine, or a question to ask.
You can learn more about our story through our past projects, blog, or podcast.