Why AI detectors are not a real thing

Why AI detectors are not a real thing. (Pixabay/franganillo)

AI detectors often struggle to differentiate between AI-generated and human-written text, producing unreliable results. This is a problem, since many people rely on them to judge a piece of writing's authenticity.

These detectors have boomed ever since AI tools entered the content creation industry, especially writing. They can be helpful, but they are far from perfect: they often fail to distinguish human writing from AI writing. Few people are aware of this, and many still rely on AI detectors, which has caused problems for students, professors, and professional writers alike.

AI can help writers by suggesting ideas, gathering sources, and improving their prose in many ways. However, some people take it too far and use AI to write the entire piece. Since the rise of ChatGPT, some students have used it to cheat on assignments, and some professors and professional writers have come to rely on it as well.

For that reason, teachers and examiners have turned to AI detectors to ensure fair judgment and grading. The problem is that AI detectors do not flag only those who use AI: innocent writers have been accused as well. That would not happen if the detectors were 100% accurate, as they are often advertised to be. This is why AI detectors are considered unreliable.

Limitations of AI detectors 

There are flaws in how AI detectors work. These tools learn to differentiate AI writing from human writing through machine learning (ML) and natural language processing (NLP). A detector is trained on labeled samples, learning to categorize a piece of writing as AI-generated or human-written. When it is then given unseen data, the text submitted by users, it classifies that text on its own.
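The train-on-labeled-samples, classify-unseen-text loop can be illustrated with a deliberately tiny toy. This is not how any real detector is built; the samples, labels, and word-counting "model" below are all hypothetical, standing in for a real detector's corpus and statistical model:

```python
from collections import Counter

# Hypothetical labeled training samples, standing in for a real
# detector's corpus of AI-written and human-written text.
samples = [
    ("delve into the intricacies of the topic", "ai"),
    ("furthermore it is important to note that", "ai"),
    ("i honestly forgot my umbrella again today", "human"),
    ("we laughed so hard the coffee went cold", "human"),
]

# "Training": count how often each word appears under each label.
word_counts = {"ai": Counter(), "human": Counter()}
for text, label in samples:
    word_counts[label].update(text.split())

def classify(text):
    """Score unseen text by which label's vocabulary it overlaps more."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("it is important to delve into this"))  # → ai
print(classify("my umbrella went cold honestly"))      # → human
```

Even this toy shows the core weakness the article describes: the verdict depends entirely on how well the training samples represent the writing being judged.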

There are several reasons why this can fail, and most of them trace back to insufficient training data. AI writing tools keep improving, leaving detectors outdated and unable to analyze new output properly. As a result, many innocent people have been accused of using AI, while those who actually use it evade punishment.

Fallacy of defining human-written content

As stated before, AI detectors rely on training data. This opens the possibility that the data is biased toward certain writing styles, cultural backgrounds, or individual preferences in language use. The training data may not represent the full spectrum of human writing styles, which makes AI detectors quite selective.

Furthermore, AI writing tools were themselves trained on human-generated text. They mimic human writing styles by analyzing individual words, the relationships between them, and how humans typically use them. So even when AI produces a piece on its own, that piece is still influenced or 'inspired' by humans. This is why AI detectors are often misled into giving false positives.

Using AI to guide your writing can also lead to your text being flagged as AI-generated in some cases. For example, if you write a text following an outline generated by ChatGPT, your text is more likely to be flagged as AI-generated, because AI detectors search for patterns and styles they are familiar with.

Misuse and implications of AI detectors 

The unreliability of AI detectors has caused plenty of trouble. Innocent people have been wrongly accused of using AI, with serious unintended consequences such as damaged reputations and academic records. Non-native English speakers may be accused more often because of their distinctive writing styles.

Meanwhile, those who actually use AI can simply manipulate the detectors. Several tricks for staying under their radar have spread among AI users. One is combining multiple prompts, which can overwhelm a detector and give it the false impression that a human wrote the text. Another simple trick is inserting minor punctuation "mistakes" to imitate human error, such as adding a space before a comma.
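The space-before-a-comma trick is mechanically trivial, which is part of the point: a few characters of noise can unsettle a pattern-matching detector. A minimal sketch of such a perturbation (the function name and `rate` parameter are illustrative, not from any real tool):

```python
import random

def inject_comma_spaces(text, rate=1.0, seed=42):
    """Insert a space before some commas to imitate a human typo.

    rate=1.0 perturbs every comma; lower values perturb a random
    subset, which is closer to how a real typo would appear.
    """
    rng = random.Random(seed)  # seeded so the output is repeatable
    out = []
    for ch in text:
        if ch == "," and rng.random() < rate:
            out.append(" ,")
        else:
            out.append(ch)
    return "".join(out)

print(inject_comma_spaces("First, second, third."))
# → "First , second , third."
```

That such a superficial change can flip a verdict says more about the detectors than about the text.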

Rephrasing and rewording also do the trick. AI writing is known for repetitive sentence structure, so adding variety makes a text more likely to be read as human. AI users don't even have to put in the work themselves, as many AI tools can paraphrase text. Beyond that, some experts in the field can manipulate AI writing tools or detectors directly by interfering with their systems.

Conclusion

Several studies suggest that AI detectors are unreliable. A Stanford University study found that about 61% of non-native English speakers' writing samples were flagged as AI-generated, compared with only 5% of native speakers'. After the non-native samples were revised with enhanced vocabulary, the rate dropped to 11.6%. Even the US Constitution, written in 1787, was flagged as AI-generated by ZeroGPT.

At the same time, QuillBot and other AI detectors rated the US Constitution as nearly 100% human-written. So there is still hope that AI detectors can become reliable tools that serve their purpose of determining the authenticity of written work, opening the door to fair judgment without skepticism in the writing industry. For now, though, AI detectors should not be fully trusted until further improvements are made.

