When I first looked at whether it’s possible to fight back against AI-generated plagiarism, and how that might work, it was January 2023, just a few months into the world’s exploding awareness of generative AI. More than a year later, it feels like we’ve been exploring generative AI for years, but we’ve only looked at the issue for about 18 months.
In any case, this is an updated version of that original January 2023 article. When I first tested GPT detectors, I used three: the GPT-2 Output Detector (this is a different URL than we published before), Writer.com AI Content Detector, and Content at Scale AI Content Detection.
Also: How to use ChatGPT: Everything you need to know
The best result was 66% correct, from the GPT-2 Output Detector. I did another test in October 2023 and added three more: GPTZero, ZeroGPT (yes, they’re different), and Writefull’s GPT Detector. Now, in the summer of 2024, I’m adding QuillBot and a commercial service, Originality.ai, to the mix.
In October 2023, I removed the Writer.com AI Content Detector from our test suite because it failed back in January 2023 and failed again in October. See below for a comment from the company, which its team sent me after the original article was published in January.
Before I go on, though, we should discuss plagiarism and how it relates to our problem. Merriam-Webster defines “plagiarize” as “to steal and pass off (the ideas or words of another) as one’s own; use (another’s production) without crediting the source.”
This definition fits AI-created content well. While someone using an AI tool like Notion AI or ChatGPT isn’t stealing content, if that person doesn’t credit the words as coming from an AI and claims them as their own, it still meets the dictionary definition of plagiarism.
Also: The best AI image generators to try
In this experimental article, I’ve asked ChatGPT to help out. My words are in normal and bold text. The AI’s words are italicized. After each AI-generated section, I’ll show the results of the detectors. At the end of the article, we’ll look at how well the detectors performed overall.
Here are the test results for the above text, which I wrote myself:
- GPT-2 Output Detector: 99.98% real
- Content at Scale AI Content Detection: 100% highly likely to be human!
- GPTZero: Your text is likely to be written entirely by a human
- ZeroGPT: 22% AI GPT Your Text is Most Likely Human written
- Writefull GPT Detector: 1% likely this comes from GPT-3, GPT-4 or ChatGPT
- Originality.ai: 84% Original 16% AI
- QuillBot: 45% of text is likely AI-generated
Human-written content: 5-of-7 correct
Before we move on, let's discuss what I'll consider a "correct" analysis. If the score is numeric, anything above 80% in the right direction (human for my writing, AI for ChatGPT's) counts as a correct analysis, and that's being generous. If the result is a summary rather than a number, I'll read it and make a judgment call.
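To make that rule concrete, here's a minimal Python sketch of how a numeric verdict gets tallied. The function and the example calls are my own, written purely for illustration; they aren't part of any detector's API.

```python
# A sketch of the grading rule used in this article (function name is my own).
# A numeric score counts as correct only if the detector leans more than 80%
# toward the right answer for that sample.

def is_correct(ai_probability: float, text_is_ai: bool) -> bool:
    """ai_probability is the detector's 0-100 'AI' score for the sample."""
    if text_is_ai:
        return ai_probability > 80.0          # AI text needs > 80% "AI"
    return (100.0 - ai_probability) > 80.0    # Human text needs > 80% "human"

# QuillBot scored my human-written intro at 45% AI-generated:
print(is_correct(45.0, text_is_ai=False))     # False -> counted as a miss
# GPTZero called the ChatGPT answer 100% AI:
print(is_correct(100.0, text_is_ai=True))     # True -> counted as a hit
```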
With that, let’s dig in some more.
Explain why ChatGPT can be a real problem for teachers and editors when it comes to plagiarism
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text by predicting the next word or phrase in a given input sequence. While ChatGPT can be a useful tool for generating text and responding to prompts, it can also be a problem for teachers and editors when it comes to plagiarism.
Also: 4 ways to overcome your biggest worries about generative AI
One of the main reasons why ChatGPT can be a problem for teachers and editors is that it can generate text that is very similar to that of real human writers. This means that students or other individuals could potentially use ChatGPT to generate text that they could then submit as their own work, thereby committing plagiarism.
For teachers and editors, detecting whether a given piece of text was generated by ChatGPT or written by a human can be very difficult, as the generated text may not contain any obvious errors or discrepancies that would reveal its artificial origin. This can make it difficult for teachers and editors to identify and prevent plagiarism, as they may not be able to tell whether a given piece of text is original or copied from another source.
Overall, ChatGPT and other similar language models can be a problem for teachers and editors when it comes to plagiarism because they can generate text that is difficult to distinguish from that written by humans, making it easier for students and other individuals to commit plagiarism without being detected.
- GPT-2 Output Detector: 99.69% real
- Content at Scale AI Content Detection: Hard to tell
- GPTZero: 100% AI
- ZeroGPT: 100% AI
- Writefull GPT Detector: 12% likely this comes from GPT-3, GPT-4 or ChatGPT
- Originality.ai: 100% AI
- QuillBot: 83% of text is likely AI-generated
AI-written content: 4-of-7 correct
Remember, text shown in italics was generated by the AI.
Show me at least three ways that I can detect ChatGPT was used to produce a piece of text
There are several ways that you can potentially detect whether ChatGPT or another language model was used to produce a piece of text. Here are three potential approaches you could use:
- Look for repetitive or unusual patterns in the text: One way to detect whether a language model was used to generate a piece of text is to look for repetitive or unusual patterns in the writing. For example, language models may repeat certain phrases or word combinations, or use unusual or inconsistent sentence structures.
- Check for lack of originality: Another way to detect whether a language model was used to generate a piece of text is to check for a lack of originality in the writing. If a piece of text appears to be copied from another source, or if it contains information that is widely available on the internet or in other sources, it could be a sign that it was generated by a language model.
- Use a plagiarism checker: Finally, you can use a plagiarism checker to check whether a piece of text appears to be copied from another source. While a plagiarism checker may not be able to definitively determine whether a language model was used to generate a piece of text, it can help you identify instances of copying or paraphrasing from other sources.
- Note when the answer is obviously wrong: I talked about this in my Notion AI review, where I spotlighted some text the AI wrote with seeming authority, but which was blatantly wrong. If a student or writer doesn’t have full command of the topic being discussed, they may not spot the inaccuracies and leave them in the AI-generated response. Of course, incorrect statements aren’t just the purview of an AI writing tool. Many bloggers and commenters, for example, repeat statements made by other bloggers, creating an echo chamber of completely “false facts”, showing that while intelligence may be artificial, stupidity is a universal trait.
Keep in mind that it can be difficult to definitively determine whether a language model was used to generate a piece of text, and these methods may not always be effective. However, they can be useful tools to help you identify potential instances of language model-generated text.
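As an aside, the first tip in the AI's list, spotting repetitive phrasing, is simple enough to approximate in code. The Python below is a toy heuristic I wrote for illustration; it counts three-word phrases that repeat within a passage, and it is not how any of the detectors tested here actually work.

```python
# Toy version of the "repetitive patterns" heuristic: count trigrams
# (three-word phrases) that appear more than once in a passage.
# Illustrative only; not a real detector's algorithm.

from collections import Counter
import re

def repeated_trigrams(text: str, min_count: int = 2) -> list[tuple[str, int]]:
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
    return [(t, n) for t, n in Counter(trigrams).most_common() if n >= min_count]

sample = (
    "One way to detect whether a language model was used to generate a piece "
    "of text is to look for repetitive patterns. Another way to detect whether "
    "a language model was used is to check for a lack of originality."
)
print(repeated_trigrams(sample))
# e.g. [('way to detect', 2), ('to detect whether', 2), ('a language model', 2), ...]
```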
In this set of tests, only the italicized content above was run through the checker suite.
- GPT-2 Output Detector: 99.98% fake
- Content at Scale AI Content Detection: Passes as human
- GPTZero: 100% AI
- ZeroGPT: 99.69% AI GPT Your Text is AI/GPT Generated
- Writefull GPT Detector: 79% likely this comes from GPT-3, GPT-4 or ChatGPT
- Originality.ai: 100% AI
- QuillBot: 100% of text is likely AI-generated
AI-written content: 5-of-7 correct
List three plagiarism checkers that can detect ChatGPT text
It is worth noting that there is currently no plagiarism checker that is specifically designed to detect text generated by ChatGPT or other language models. While some plagiarism checkers may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, there is no guarantee that they will be able to definitively determine whether a piece of text was generated by a language model.
With that said, here are three plagiarism checkers that you might consider using to help identify potential instances of language model-generated text:
- Turnitin: Turnitin is a popular plagiarism checker that is widely used by schools and universities. It compares a piece of text to a database of other texts to identify instances of copying or paraphrasing. While it may not be able to definitively determine whether a piece of text was generated by a language model, it could potentially identify certain characteristics of language model-generated text that could indicate its artificial origin.
- PlagScan: PlagScan is another plagiarism checker that compares a piece of text to a database of other texts to identify instances of copying or paraphrasing. Like Turnitin, it may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, but it is not specifically designed to detect language model-generated text.
- Copyleaks: Copyleaks is a plagiarism checker that uses advanced machine learning algorithms to scan a piece of text and compare it to a database of other texts to identify instances of copying or paraphrasing. While it may be able to identify certain characteristics of language model-generated text, it is not specifically designed to detect language model-generated text.
It is worth noting that no plagiarism checker is completely foolproof, and there is always the possibility that a piece of language model-generated text could escape detection. Therefore, it is important to use plagiarism checkers as just one part of a larger strategy to detect and prevent plagiarism.
- GPT-2 Output Detector: 99.58% real
- Content at Scale AI Content Detection: Passes as human
- GPTZero: 100% AI
- ZeroGPT: 100% AI GPT
- Writefull GPT Detector: 86% likely this comes from GPT-3, GPT-4 or ChatGPT
- Originality.ai: 100% AI
- QuillBot: 100% of text is likely AI-generated
AI-written content: 5-of-7 correct
Online AI plagiarism checkers
Most plagiarism detectors work by comparing writing against a corpus of other writing. For example, when a student turns in an essay, a product like Turnitin scans the submission against a huge library of essays in its database, as well as other documents and text across the internet, to determine whether it contains already-written content.
However, AI-writing tools generate original content, at least in theory. Yes, they build that content from whatever they've been trained on, but the words they string together are essentially unique to each composition.
Also: OpenAI pulls its own AI detection tool because it was performing so poorly
As such, the plagiarism checkers mentioned above probably won't catch AI-generated content, because that content most likely never appeared in, say, another student's paper.
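To see why, here's a crude Python sketch of what a corpus-comparison checker does. This is my own toy example, not Turnitin's or any vendor's actual algorithm: a submission is flagged only when its word sequences already exist in the reference corpus, and freshly generated AI text usually isn't there to match.

```python
# Toy corpus-comparison check (illustrative; not any vendor's real algorithm).
# A submission is suspicious only if many of its 5-word sequences already
# appear somewhere in the reference corpus.

def shingles(text: str, n: int = 5) -> set[str]:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission: str, corpus: list[str], n: int = 5) -> float:
    sub = shingles(submission, n)
    if not sub:
        return 0.0
    seen = set().union(*(shingles(doc, n) for doc in corpus))
    return len(sub & seen) / len(sub)

corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
copied = "the quick brown fox jumps over the lazy dog"
fresh = "a freshly generated sentence that appears nowhere in the reference corpus"

print(overlap_ratio(copied, corpus))  # 1.0 -> would be flagged as copied
print(overlap_ratio(fresh, corpus))   # 0.0 -> sails through, like most AI text
```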
In this article, we're just looking at GPT detectors. But plagiarism is a big problem and, as we've seen, its definition is contested: some define it as claiming any words you didn't write as your own, while others define it more narrowly, as claiming words written by another person.
That distinction barely mattered until now. With non-human writers in the mix, the line is more nuanced, and it's up to every teacher, school, editor, and institution to decide exactly where to draw it.
- GPT-2 Output Detector: 99.56% real
- Content at Scale AI Content Detection: Passes as human
- GPTZero: 98% human
- ZeroGPT: 16.82% AI Your text is human written
- Writefull GPT Detector: 7% likely this comes from GPT-3, GPT-4 or ChatGPT
- Originality.ai: 84% Original 16% AI
- QuillBot: 0% of text is likely AI-generated
Human-written content: 7-of-7 correct
Overall results
Overall, the results this time are dramatically better than in previous rounds of testing.
In our previous runs, none of the tests got everything right. This time, three of the seven services tested got the results correct 100% of the time.
| Test | Overall | Human | AI | AI | AI | Human |
|---|---|---|---|---|---|---|
| GPT-2 Output Detector | 60% | Correct | Fail | Correct | Fail | Correct |
| Content at Scale AI Content Detection | 40% | Correct | Fail | Fail | Fail | Correct |
| GPTZero | 100% | Correct | Correct | Correct | Correct | Correct |
| ZeroGPT | 100% | Correct | Correct | Correct | Correct | Correct |
| Writefull GPT Detector | 80% | Correct | Fail | Correct | Correct | Correct |
| Originality.ai | 100% | Correct | Correct | Correct | Correct | Correct |
| QuillBot | 80% | Fail | Correct | Correct | Correct | Correct |
While the overall results have improved dramatically, I would not be comfortable relying solely on these tools to validate a student's content. As has been shown, writing from non-native speakers often gets rated as AI-generated, and even though my hand-crafted content is no longer rated as AI overall, the detectors still flagged a few of my paragraphs as possibly AI-based. So, I would advocate caution before relying on the results of any (or all) of these tools.
Also: Could someone falsely accuse you of using AI-generated text? This could be why
Let’s look at the individual testers and see how each performed.
GPT-2 Output Detector (Accuracy 60%)
This first tool was built on a machine-learning hub managed by New York-based AI company Hugging Face. While the company has received $40 million in funding to develop its natural language library, the GPT-2 detector appears to be a user-created tool built with the Hugging Face Transformers library. Of the five tests I ran, it was accurate for three of them.
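If you want to poke at it yourself, the underlying model (a RoBERTa classifier fine-tuned to spot GPT-2 output) is published on the Hugging Face hub. Here's a minimal sketch using the Transformers library; the model ID is the publicly listed checkpoint at the time of writing, and this approximates the web demo rather than reproducing its exact code.

```python
# Minimal sketch: run the public GPT-2 output detector checkpoint locally.
# The model ID below is the publicly hosted checkpoint on the Hugging Face
# hub; label names ("Real"/"Fake") come from that model's configuration.

from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

text = "This is a short passage whose origin we want to check."
print(detector(text))
# e.g. [{'label': 'Real', 'score': 0.98}]
```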