OpenAI has been working on a watermarking system for ChatGPT text for over a year, but the company is still on the fence about rolling it out.

The proposed watermarking technique would subtly alter how ChatGPT selects words, leaving a statistically detectable pattern that distinguishes AI-generated text from human writing. This could be a game-changer for educators and content verifiers trying to spot AI-written work. However, the internal debate at OpenAI is heating up.
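OpenAI has not published the details of its scheme, but text watermarks of this kind are usually described as nudging the model's token choices toward a pseudo-randomly chosen "green" subset of the vocabulary, which a detector holding the same secret can recompute and count. The sketch below illustrates that general idea in the style of published research watermarks, not OpenAI's actual method; every name in it is invented for illustration.

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly pick a 'green' subset of the vocabulary, seeded by the
    previous token, so a detector with the same scheme can recompute it."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def pick_token(prev_token: str, candidates: list[str], vocab: list[str]) -> str:
    """Generation side: prefer a green candidate when one is available.
    A real model would instead add a small bias to green-token logits."""
    greens = [c for c in candidates if c in green_list(prev_token, vocab)]
    return greens[0] if greens else candidates[0]


def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Score = share of tokens that fall in their predecessor's green list.
    Human text should hover near `fraction`; watermarked text scores higher."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    return hits / max(len(tokens) - 1, 1)
```

Note that the detector needs only the seeding scheme, not the model itself. The flip side is fragility: paraphrasing or translating the output scrambles the token sequence, which is exactly the kind of circumvention OpenAI reportedly worries about.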

Why the hesitation? Although the watermarking is reported to be 99.9% effective, OpenAI is concerned about user reactions: in a company survey, nearly 30% of users said they’d use ChatGPT less if the watermarking system were implemented. OpenAI also worries the watermark could stigmatize AI writing tools, especially for non-native English speakers who rely on them, and that bad actors could strip it through simple rewording.

In response, OpenAI is exploring alternatives such as embedding cryptographically signed metadata, which the company suggests would carry no risk of false positives and might prove less controversial and more user-friendly. But it’s still early days for these methods.
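Metadata-based provenance works differently from a watermark: instead of hiding a signal in the words themselves, the provider attaches a cryptographically signed record to the output, so verification either succeeds or fails outright, with no statistical gray zone. Here is a minimal sketch of that idea, assuming an HMAC secret held by the provider; all names are hypothetical, not OpenAI's design.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-held-secret"  # hypothetical key; in practice kept server-side


def sign_provenance(text: str) -> str:
    """Produce a detached provenance record binding a hash of the exact text."""
    payload = json.dumps({
        "source": "ai-generated",
        "sha256": hashlib.sha256(text.encode()).hexdigest(),
    })
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"payload": payload, "tag": tag})


def verify_provenance(text: str, record: str) -> bool:
    """True only if the record is authentic and matches this exact text."""
    rec = json.loads(record)
    expected = hmac.new(SECRET_KEY, rec["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, rec["tag"]):
        return False
    return json.loads(rec["payload"])["sha256"] == hashlib.sha256(text.encode()).hexdigest()
```

The trade-off is fragility in the other direction: copy the text without its record and the provenance is simply gone, whereas a statistical watermark travels with the words.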