Dan Milmo Global technology editor 

ChatGPT’s refusal to acknowledge ‘David Mayer’ down to glitch, says OpenAI

Name was mistakenly flagged and prevented from appearing in responses, says chatbot’s developer
  
  

Legions of chatbot wranglers spent days trying – and failing – to make ChatGPT write the words ‘David Mayer’. Photograph: Jaque Silva/NurPhoto/REX/Shutterstock

Last weekend the name was all over the internet – just not on ChatGPT.

David Mayer became famous for a moment on social media because the popular chatbot appeared to want nothing to do with him.

Legions of chatbot wranglers spent days trying – and failing – to make ChatGPT write the words “David Mayer”. But the chatbot refused to comply, with replies ranging from “something seems to have gone wrong” to “I’m unable to produce a response”, or simply stopping at “David”.

This produced a blizzard of online speculation about Mayer’s identity. It also led to theories that whoever David Mayer is, he had asked for his name to be removed from ChatGPT’s output.

ChatGPT’s developer, OpenAI, has provided some clarity on the situation, stating that the Mayer issue was due to a system glitch. “One of our tools mistakenly flagged this name and prevented it from appearing in responses, which it shouldn’t have. We’re working on a fix,” said an OpenAI spokesperson.

Some of those speculating on social media guessed the man at the centre of the issue was David Mayer de Rothschild, but he told the Guardian it was nothing to do with him and referenced the conspiracy theorising that can cluster around his family’s name online.

“No I haven’t asked my name to be removed. I have never had any contact with Chat GPT. Sadly it all is being driven by conspiracy theories,” he told the Guardian.

It is also understood the glitch was unrelated to the late academic Prof David Mayer, who appeared to have been placed on a US security list because his name matched the alias of a Chechen militant, Akhmed Chatayev.

However, the answer may lie in the GDPR privacy rules in the UK and EU. OpenAI’s Europe privacy policy makes clear that users can request the deletion of their personal data from its products, under a process also known as the “right to be forgotten”, which allows someone to have personal information removed from the internet.

OpenAI declined to comment on whether the “Mayer” glitch was related to a right to be forgotten procedure.

OpenAI has fixed the “David Mayer” issue and is now responding to queries using that name, although other names that appeared on social media over the weekend are still triggering a “something appears to have gone wrong” response when typed into ChatGPT.

Helena Brown, a partner and data protection specialist at law firm Addleshaw Goddard, said “right to be forgotten” requests would apply to any entity or person processing that person’s data – from the AI tool itself to any organisation using that tool.

“It’s interesting to see in the context of the David Mayer issue that an entire name can be removed from the whole AI tool,” she said.

However, Brown added that fully removing all information capable of identifying a specific person could be more difficult for AI tools.

“The sheer volume of data involved in GenAI and the complexity of the tools creates a privacy compliance problem,” she said, adding that deleting all information relating to a single person would not be as straightforward as removing their name.

“A huge amount of personal data is gathered, including from public sources such as the internet, to develop AI models and produce their outputs. This means that the ability to trace and delete all personal information capable of identifying a single individual is, arguably, practically impossible.”

 
