
I tested 9 AI content detectors – and these 2 correctly identified AI text every time

diyun Zhu/Getty Images

When I first examined whether it’s possible to fight back against AI-generated plagiarism, and how that might work, it was January 2023, just a few months into the world’s exploding awareness of generative AI. 

This is an updated version of that original January 2023 article. When I first tested GPT detectors, I used three: the GPT-2 Output Detector (this is a different URL than we published before), Writer.com AI Content Detector, and Content at Scale AI Content Detection (which is apparently now called BrandWell). 

Also: How to use ChatGPT: Everything you need to know

The best result was 66% correct from the GPT-2 Output Detector. I did another test in October 2023 and added three more: GPTZero, ZeroGPT (yes, they’re different), and Writefull’s GPT Detector. Then, in the summer of 2024, I added QuillBot and a commercial service, Originality.ai, to the mix. This time, I’ll also be adding Grammarly’s beta checker.

In October 2023, I removed the Writer.com AI Content Detector from our test suite because it had failed in January 2023 and failed again in October; it failed once more when I retried it in summer 2024. However, it now appears to work, so I'm including it in the test suite again. See below for a comment from the company, which its team sent me after the original article was published in January 2023.

Also: 88% of workers would use AI to overcome task paralysis, Google study says

I’ve re-run all the tests to see how the detectors perform today. While I had two strong successes, the big takeaway seems to be just how inconsistent the results are from one AI checker to another.


What I’m testing for and how I’m doing it

Before I go on, though, we should discuss plagiarism and how it relates to our problem. Merriam-Webster defines “plagiarize” as “to steal and pass off (the ideas or words of another) as one’s own; use (another’s production) without crediting the source.”

This definition fits AI-created content well. While someone using an AI tool like Notion AI or ChatGPT isn’t stealing content, if that person doesn’t credit the words as coming from an AI and claims them as their own, it still meets the dictionary definition of plagiarism.

Also: The best AI image generators to try

In this experimental article, I’ve asked ChatGPT to help out. My words are in normal and bold text. The AI’s words are italicized. After each AI-generated section, I’ll show the results of the detectors. At the end of the article, we’ll look at how well the detectors performed overall.

Here are the test results for the above text, which I wrote myself:

  • GPT-2 Output Detector: 99.98% real 
  • Writer.com: 95% human-generated content
  • BrandWell AI Content Detection: Passes as human
  • GPTZero: 98% human
  • ZeroGPT: 22% AI – Your text is most likely human written
  • Writefull GPT Detector: 1% likely this comes from GPT-3, GPT-4 or ChatGPT
  • Originality.ai: We are 99% confident that text is original
  • QuillBot: 42% of text is likely AI-generated
  • Grammarly: We didn’t detect common AI text patterns. Your document doesn’t appear to contain AI-generated text.

Human-written content: 7-of-9 (77%) correct
Last time: 5-of-7 (71%) correct 

Before we move on, let’s discuss what I’ll consider a “correct” analysis. If the score is numeric, the detector has to be more than 80% confident in the right direction (human for my hand-written text, AI for the ChatGPT passages) to count as correct, and that’s being generous. If the result is a summary rather than a number, I’ll read the summary and make a judgment call.
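To make that rule concrete, here’s a rough Python sketch of how I score a numeric result. The helper function and its name are purely illustrative; only the 80% threshold and the example scores come from the tests above.

```python
def is_correct(ai_pct: float, text_is_ai: bool, threshold: float = 80.0) -> bool:
    """True if a numeric detector score counts as a correct call under my rule."""
    if text_is_ai:
        # AI-written sample: detector must be more than 80% sure it's AI.
        return ai_pct > threshold
    # Human-written sample: detector must be more than 80% sure it's human.
    return (100.0 - ai_pct) > threshold

# ZeroGPT's "22% AI" on my human-written intro works out to 78% human, which misses the bar.
print(is_correct(22.0, text_is_ai=False))   # False
# GPTZero's "100% AI" on ChatGPT's text clears it easily.
print(is_correct(100.0, text_is_ai=True))   # True
```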

With that, let’s dig in some more.

Explain why ChatGPT can be a real problem for teachers and editors when it comes to plagiarism

ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text by predicting the next word or phrase in a given input sequence. While ChatGPT can be a useful tool for generating text and responding to prompts, it can also be a problem for teachers and editors when it comes to plagiarism.

Also: 4 ways to overcome your biggest worries about generative AI

One of the main reasons why ChatGPT can be a problem for teachers and editors is that it can generate text that is very similar to that of real human writers. This means that students or other individuals could potentially use ChatGPT to generate text that they could then submit as their own work, thereby committing plagiarism.

For teachers and editors, detecting whether a given piece of text was generated by ChatGPT or written by a human can be very difficult, as the generated text may not contain any obvious errors or discrepancies that would reveal its artificial origin. This can make it difficult for teachers and editors to identify and prevent plagiarism, as they may not be able to tell whether a given piece of text is original or copied from another source.

Overall, ChatGPT and other similar language models can be a problem for teachers and editors when it comes to plagiarism because they can generate text that is difficult to distinguish from that written by humans, making it easier for students and other individuals to commit plagiarism without being detected.

  • GPT-2 Output Detector: 99.69% real
  • Writer.com: 77% human-generated content
  • BrandWell AI Content Detection: Hard to tell
  • GPTZero: 100% AI
  • ZeroGPT: 100% AI
  • Writefull GPT Detector: 12% likely this comes from GPT-3, GPT-4 or ChatGPT.
  • Originality.ai: 100% AI
  • QuillBot: 83% likely AI
  • Grammarly: We didn’t detect common AI text patterns. Your document doesn’t appear to contain AI-generated text

AI-written content: 4-of-9 (44%) correct
Last time: 4-of-7 (57%) correct

Remember, text shown in italics was generated by the AI.

Show me at least three ways that I can detect ChatGPT was used to produce a piece of text

There are several ways that you can potentially detect whether ChatGPT or another language model was used to produce a piece of text. Here are three potential approaches you could use:

  1. Look for repetitive or unusual patterns in the text: One way to detect whether a language model was used to generate a piece of text is to look for repetitive or unusual patterns in the writing. For example, language models may repeat certain phrases or word combinations, or use unusual or inconsistent sentence structures.
  2. Check for lack of originality: Another way to detect whether a language model was used to generate a piece of text is to check for a lack of originality in the writing. If a piece of text appears to be copied from another source, or if it contains information that is widely available on the internet or in other sources, it could be a sign that it was generated by a language model.
  3. Use a plagiarism checker: Finally, you can use a plagiarism checker to check whether a piece of text appears to be copied from another source. While a plagiarism checker may not be able to definitively determine whether a language model was used to generate a piece of text, it can help you identify instances of copying or paraphrasing from other sources.
  4. Note when the answer is obviously wrong: I talked about this in my Notion AI review, where I spotlighted some text the AI wrote with seeming authority but which was blatantly wrong. If a student or writer doesn’t have full command of the topic being discussed, they may not spot the inaccuracies and leave them in the AI-generated response. Of course, incorrect statements aren’t just the purview of an AI writing tool. Many bloggers and commenters, for example, repeat statements made by other bloggers, creating an echo chamber of completely “false facts”, showing that while intelligence may be artificial, stupidity is a universal trait.

Keep in mind that it can be difficult to definitively determine whether a language model was used to generate a piece of text, and these methods may not always be effective. However, they can be useful tools to help you identify potential instances of language model-generated text.
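As a quick illustration of the first approach on ChatGPT’s list (repetitive or unusual patterns), here’s a toy Python check that counts repeated word trigrams. It is nowhere near a real detector: the names are mine, any “suspicious” threshold would be arbitrary, and real tools lean on far more sophisticated statistical signals.

```python
from collections import Counter
import re

def trigram_repetition_ratio(text: str) -> float:
    """Fraction of word trigrams that occur more than once in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

sample = ("It can be difficult to determine whether a language model was used. "
          "It can be difficult to determine whether the text is original.")
print(f"{trigram_repetition_ratio(sample):.1%} of trigrams are repeated")
```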

Also: This Google AI tool could be your new favorite study aid – and it’s free

In this set of tests, only the italicized content above was run through the checker suite.

  • GPT-2 Output Detector: 99.98% fake
  • Writer.com: 73% human-generated content
  • BrandWell AI Content Detection: Passes as human
  • GPTZero: 100% AI
  • ZeroGPT: 96.99% AI generated
  • Writefull GPT Detector: 79% likely this comes from GPT-3, GPT-4 or ChatGPT.
  • Originality.ai: 100% AI
  • QuillBot: 100% AI
  • Grammarly: We didn’t detect common AI text patterns. Your document doesn’t appear to contain AI-generated text

AI-written content: 5-of-9 (55%) correct
Last time: 5-of-7 (71%) correct

List three plagiarism checkers that can detect ChatGPT text

It is worth noting that there is currently no plagiarism checker that is specifically designed to detect text generated by ChatGPT or other language models. While some plagiarism checkers may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, there is no guarantee that they will be able to definitively determine whether a piece of text was generated by a language model.

With that said, here are three plagiarism checkers that you might consider using to help identify potential instances of language model-generated text:

  1. Turnitin: Turnitin is a popular plagiarism checker that is widely used by schools and universities. It compares a piece of text to a database of other texts to identify instances of copying or paraphrasing. While it may not be able to definitively determine whether a piece of text was generated by a language model, it could potentially identify certain characteristics of language model-generated text that could indicate its artificial origin.
  2. PlagScan: PlagScan is another plagiarism checker that compares a piece of text to a database of other texts to identify instances of copying or paraphrasing. Like Turnitin, it may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, but it is not specifically designed to detect language model-generated text.
  3. Copyleaks: Copyleaks is a plagiarism checker that uses advanced machine learning algorithms to scan a piece of text and compare it to a database of other texts to identify instances of copying or paraphrasing. While it may be able to identify certain characteristics of language model-generated text, it is not specifically designed to detect language model-generated text.

It is worth noting that no plagiarism checker is completely foolproof, and there is always the possibility that a piece of language model-generated text could escape detection. Therefore, it is important to use plagiarism checkers as just one part of a larger strategy to detect and prevent plagiarism.

  • GPT-2 Output Detector: 99.58% real
  • Writer.com: 74% human-generated content
  • BrandWell AI Content Detection: Passes as human
  • GPTZero: 100% AI
  • ZeroGPT: 100% AI
  • Writefull GPT Detector: 87% likely this comes from GPT-3, GPT-4 or ChatGPT.
  • Originality.ai: 100% AI
  • QuillBot: 100% AI-generated
  • Grammarly: No plagiarism or AI text detected

AI-written content: 5-of-9 (55%) correct
Last time: 5-of-7 (71%) correct

Online AI plagiarism checkers

Most plagiarism detectors are used to compare writing against a corpus of other writing. For example, when a student turns in an essay, a product like Turnitin scans the submitted essay against a huge library of essays in its database, and other documents and text on the internet to determine if the submitted essay contains already-written content.

However, the AI-writing tools generate original content, at least in theory. Yes, they build their content from whatever they’ve been trained on, but the words they construct are somewhat unique for each composition.

Also: OpenAI pulls its own AI detection tool because it was performing so poorly

As such, the plagiarism checkers mentioned above probably won’t work because the AI-generated content probably didn’t exist in, say, another student’s paper.
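To see why, consider a bare-bones sketch of the kind of exact-overlap matching a corpus-based checker relies on. Everything here, from the function names to the five-word shingles, is just illustration, not how Turnitin or any other product actually works.

```python
import re

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """All overlapping n-word sequences in the text, lowercased."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission: str, corpus_docs: list[str]) -> float:
    """Share of the submission's shingles that also appear somewhere in the corpus."""
    sub = shingles(submission)
    if not sub:
        return 0.0
    corpus: set[tuple[str, ...]] = set()
    for doc in corpus_docs:
        corpus |= shingles(doc)
    return len(sub & corpus) / len(sub)

# A verbatim copy scores near 1.0; freshly generated AI prose, which rarely
# repeats anyone's exact five-word runs, scores near 0.0 and sails through.
```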

In this article, we’re just looking at GPT detectors. But plagiarism is a big problem, and as we’ve seen, some choose to define plagiarism as something you claim as yours that you didn’t write, while others choose to define plagiarism as something written by someone else that you claim is yours.

That distinction was never a problem until now. Now that we have non-human writers, the plagiarism distinction is more nuanced. It’s up to every teacher, school, editor, and institution to decide exactly where that line is drawn.

  • GPT-2 Output Detector: 99.56% real
  • Writer.com: 98% human-generated content
  • BrandWell AI Content Detection: Passes as human
  • GPTZero: 98% human
  • ZeroGPT: 16.82% AI – Your text is human written
  • Writefull GPT Detector: 7% likely this comes from GPT-3, GPT-4 or ChatGPT.
  • Originality.ai: 100% original
  • QuillBot: 0% AI
  • Grammarly: No plagiarism or AI text detected

Human-written content: 9-of-9 (100%) correct
Last time: 7-of-7 (100%) correct

Overall results

Overall, results declined compared to the last round of tests. Last time, we had three services with perfect scores. That’s down to two now, because ZeroGPT, one of our previously perfect-scoring players, failed a test it passed before. The two detectors added this round, Writer.com (back after its earlier removal) and Grammarly, didn’t improve matters. In fact, both were generally unsuccessful.

Test | Overall | Human 1 | AI 1 | AI 2 | AI 3 | Human 2
GPT-2 Output Detector | 60% | Correct | Fail | Correct | Fail | Correct
Writer.com | 40% | Correct | Fail | Fail | Fail | Correct
BrandWell AI Detector | 40% | Correct | Fail | Fail | Fail | Correct
GPTZero | 100% | Correct | Correct | Correct | Correct | Correct
ZeroGPT | 80% | Fail | Correct | Correct | Correct | Correct
Writefull GPT Detector | 60% | Correct | Fail | Fail | Correct | Correct
Originality.ai | 100% | Correct | Correct | Correct | Correct | Correct
QuillBot | 80% | Fail | Correct | Correct | Correct | Correct
Grammarly | 40% | Correct | Fail | Fail | Fail | Correct

Whatever the round-to-round movement in the numbers, I would not be comfortable relying solely on these tools to validate a student’s work. As has been shown, writing from non-native speakers often gets rated as AI-generated, and even though my hand-crafted content is no longer flagged wholesale as AI, a few paragraphs were still marked by some of the testers as possibly AI-based. You can also see how wildly inconsistent the results are from one testing system to another. So, I would advocate caution before relying on the results of any (or all) of these tools.

Let’s look at the individual testers and see how each performed.

GPT-2 Output Detector (Accuracy 60%)

This first tool was built using a machine-learning hub managed by New York-based AI company Hugging Face. While the company has received $40 million in funding to develop its natural language library, the GPT-2 detector appears to be a user-created tool using the Hugging Face Transformers library. Of the five tests I ran, the detector was accurate in three.
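If you want to poke at what appears to be the same underlying approach yourself, a few lines of Python with the Transformers library will do it. The checkpoint name below is my assumption about the model behind the web demo; it may have moved or been renamed since.

```python
# Rough sketch of running a RoBERTa-based GPT-2 output detector locally with
# Hugging Face Transformers. The model id is an assumption, not confirmed by
# the demo site itself.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

result = detector(
    "While ChatGPT can be a useful tool for generating text, "
    "it can also be a problem for teachers and editors."
)
print(result)  # e.g. [{'label': 'Fake', 'score': 0.98}]; labels are 'Real' / 'Fake'
```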

Screenshot by David Gewirtz/ZDNET

Writer.com AI Content Detector (Accuracy N/A)

Writer.com is a service that generates AI writing, oriented toward corporate teams. Its AI Content Detector tool can scan for generated content. I found this tool unreliable. While it previously failed to produce results at all, it ran this time. Unfortunately, its accuracy was quite low: it identified essentially every block of text as human-written, even though three of the five tests were written by ChatGPT.

Also: How to use ChatGPT to digitize your handwritten notes for free

After this article was originally published in January, the folks at Writer.com reached out to ZDNET. CEO May Habib had this comment to share:

Demand for the AI detector has skyrocketed. Traffic has grown 2-3x per week since we launched it a couple months ago. We’ve now got the necessary scaling behind it to make sure it doesn’t go down, and our goal is to keep it free – and up to date to catch the latest models’ outputs, including ours. If AI output is going to be used verbatim, it absolutely should be attributed.

Screenshot by David Gewirtz/ZDNET

BrandWell AI Content Detection (Accuracy 40%)

The third tool I found was originally produced by an AI content generation firm, Content at Scale. Subsequently, the tool migrated to Brandwell.ai, which appears to be a new name for what is now an AI-centric marketing services company.

Unfortunately, the accuracy was pretty low. The tool rated the AI content as human (or, at best, “hard to tell”), as in this screenshot:


This text was entirely written by ChatGPT.

Screenshot by David Gewirtz/ZDNET

GPTZero (Accuracy 100%)

It’s not entirely clear what drives GPTZero. The company is hiring engineers and sales folks, and it runs on AWS, so there are expenses and sales. However, all I could find by way of a service offering was a place to register for a free account, which lets you scan more than the 5,000 words allowed without a login. If you’re interested in this service for GPT detection, you’ll have to see whether the company will respond to you with more details. Accuracy improved after the first round of tests and held at 100% again this time.

Screenshot by David Gewirtz/ZDNET

ZeroGPT (Accuracy 80%)

ZeroGPT seems to have matured as a service since we last looked at it. When we last looked, no company name was listed, and the site was peppered with Google ads with no apparent strategy for monetization. The service worked fairly well but seemed sketchy as heck.

Also: AI isn’t hitting a wall, it’s just getting too smart for benchmarks, says Anthropic

That sketchy-as-heck feeling is now gone. ZeroGPT presents as any other SaaS service, complete with pricing, company name, contact information, and all the rest. It still performs quite well, so perhaps the developers decided to turn their working code into more of a working business. Accuracy dropped, though. It misread one human-written test as AI.

Screenshot by David Gewirtz/ZDNET

Writefull GPT Detector (Accuracy 60%)

Writefull sells writing support services and offers a free taste of its tools. Its GPT detector is fairly new and has worked fairly well, but it has had some ups and downs in our tests: it improved from 60% to 80%, then dropped back to 60% this time.

Screenshot by David Gewirtz/ZDNET

Originality.ai (Accuracy 100%, sort of)

Originality.ai is a commercial service that bills itself as both an AI checker and a plagiarism checker. The company sells its services based on usage credits. To give you an idea, all the scans I did for this article used 30 usage credits. The company sells 2,000 credits a month for $12.95 per month. I pumped about 1,400 words through the system and used only 1.5% of the monthly allocation.
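If you want to sanity-check that usage math, it works out as follows (the numbers are simply the ones quoted above):

```python
# Quick check on the Originality.ai usage figures quoted above.
credits_used = 30
credits_per_month = 2000
print(f"{credits_used / credits_per_month:.1%} of the monthly allocation")  # 1.5%
```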

Screenshot by David Gewirtz/ZDNET

Results were great for the AI checker, but the tool failed three out of five times when used as a plagiarism checker. The following screenshot claims that the pasted text was 0% plagiarized:

Screenshot by David Gewirtz/ZDNET

That’s wrong since all the text pasted into the tool was from this article, published online for two years. I thought, perhaps, that the plagiarism scanner couldn’t read ZDNET content, but that’s not the case, as this screenshot shows:

Screenshot by David Gewirtz/ZDNET

To be fair, I didn’t set out to check plagiarism checkers in this article. But since I’m using source material I know I pulled from my existing article, I figured the plagiarism checker would have slammed all of them as 100% plagiarized. In any case, Originality.ai did very well on the part we set out to test, the AI checker. The tool gets points for that.  

QuillBot (Accuracy 80%-ish)

Nothing is ever easy. The first time I ran my opening test through QuillBot, it said 45% of the text was likely generated by an AI. It wasn’t; I wrote it. But then, after completing all the other tests, I returned to QuillBot to grab a screenshot for this section, fed it the same text that had generated the 45% score, and, as you can see, it now reports 0% AI.

Screenshot by David Gewirtz/ZDNET

So, what are we to make of this result? Sadly, I didn’t grab a screenshot of that first run, but the experience highlights the risk of relying too heavily on AI detectors: they, too, are quite capable of hallucination.

Grammarly (Accuracy 40%)

Grammarly is a well-known tool for helping writers produce grammatically correct content. That’s not what we’re testing here. Grammarly can also check for both plagiarism and AI content: paste a document into its grammar checker, and in the lower-right corner there’s a Plagiarism and AI text check button.

Screenshot by David Gewirtz/ZDNET

In this test, the tool found an existing online document that matched what I pasted in. That result makes sense, because this is an update to an article that has been online for a few years. But what isn’t correct is the statement that “Your document doesn’t appear to contain AI-generated text,” because the entire segment was generated by ChatGPT.

Screenshot by David Gewirtz/ZDNET

What about OpenAI’s own ChatGPT detector?

Well, OpenAI pulled its original detector last year because it wasn’t particularly accurate. As of August, a new detection tool was supposed to be 99% accurate, but there’s still no sign of a release four months later. The claimed accuracy level is also a little tough to believe, because ChatGPT itself is far from 100% accurate.

Also: Will OpenAI’s new AI detection tool put an end to student cheating?

But, in any case, as my ZDNET buddy Steven Vaughan-Nichols writes, it’s not available yet. We’ll give the tool a spin when it’s released for public use. Stay tuned.

Final thoughts

First, I’ll let ChatGPT have its say: AI-driven plagiarism is a growing problem, as AI tools like Notion AI and ChatGPT can generate text that is difficult to distinguish from that written by humans. There are several ways to detect whether a piece of text was generated by an AI, such as looking for repetitive or unusual patterns, checking for a lack of originality, or using a plagiarism checker. Plagiarism checkers like Turnitin, PlagScan, and Copyleaks may be able to identify certain characteristics of language model-generated text, but they are not foolproof.

Most of the specialty tools I found were poorly suited to the task. There was some limited improvement over my first tests in January 2023. However, I certainly wouldn’t feel comfortable threatening a student’s academic standing or accusing them of cheating based on the results of these tools.

Also: Surprise! AI chatbots don’t increase student cheating after all, research finds

At this point, I don’t think we’re ready to let AI fight AI. Instead, we’ll have to rely on human perception (which means folks have to pay attention) to identify works generated by an AI.

Don’t immediately assume that something is machine-generated. Just because something is stupid, or wildly inaccurate, doesn’t mean it was not written by a human. As we all know, humans can write some bad stuff on their own.

Also: Could someone falsely accuse you of using AI-generated text? This could be why

So, what do you think? Can you tell if a piece is AI-generated? Have you used these AI writing tools in the months since they suddenly became available? Do you think they’re a help or hindrance to education and journalism? Let us know in the comments below. Here’s how ChatGPT answered this:

As a language model, I can’t see or interact with physical pieces of writing, so I can’t tell if a piece is AI-generated or not. However, I can tell you that there are several AI writing tools that are available, and they can be used to generate a wide range of written content, including news articles, fiction, poetry, and more.

I don’t have personal experience using these tools since I am just a computer program and don’t have the ability to browse the internet or use physical tools. However, I can tell you that some people have found AI writing tools to be helpful in education and journalism, as they can save time and effort by automating certain tasks. Others have expressed concerns about the potential for these tools to be used to produce misleading or biased content or to replace human writers. Ultimately, whether AI writing tools are a help or hindrance depends on how they are used and the context in which they are used.

So, yeah, there’s that.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

