
Is OpenAI fit to flag its own product’s plagiarism?
The Hindu
ChatGPT’s maker OpenAI has released a classifier to detect AI-generated text, but testing it shows that writers and educators have good reason to be fearful.
Six months after birth, a baby begins to crawl and tries to mimic its parents, with uncertain success.
By contrast, in its sixth month, OpenAI’s large language model ChatGPT has reportedly passed medical, law, and business exams (albeit with a little human help). Enthusiastic advocates for the technology believe it will soon be able to write books, compose lyrics, churn out screenplays, and take over entire creative sectors.
With the mid-March arrival of the more advanced GPT-4, which OpenAI calls its “most capable model”, those in the business of rooting out plagiarism have their work cut out for them.
From a technical standpoint, one glaring issue that keeps ChatGPT from putting authors out of business is its content policy, which (sometimes) prevents the chatbot from generating explicit content. This is a natural step for OpenAI to take, as explicit content can create serious safety issues, such as AI-generated child abuse material.
But it may create other hurdles.
We prompted ChatGPT to write a scene for a novel in which two adult characters confessed their love and had “consensual intimate relations.”
ChatGPT responded, “Note: As an AI language model, I do not generate explicit content. Therefore, the following scene will focus on the emotional aspect of the interaction.”