Mark Sweney 

From Rupert Murdoch to Thom Yorke: the growing backlash to AI

Media mogul and leading artists join fight to stop tech firms using creative works for free as training data
  
  

Thom Yorke is among 25,000 artists who have signed a statement warning that AI firms training programs on their works poses a ‘major, unjust threat’ to their livelihoods. Photograph: Andrew Milligan/PA

It is an unlikely alliance: the billionaire media mogul Rupert Murdoch and a panoply of leading artists including the Radiohead singer, Thom Yorke, the actors Kevin Bacon and Julianne Moore, and the author Kazuo Ishiguro.

This week, they began two very public fights with artificial intelligence companies, accusing them of using their intellectual property without permission to build the increasingly powerful and lucrative new technology.

More than 13,000 creative professionals from the worlds of literature, music, film, theatre and television released a statement warning that AI firms training programs such as ChatGPT on their works without a licence posed a “major, unjust threat” to their livelihoods. By the end of the week that number had almost doubled to 25,000.

It came a day after Murdoch, owner of the publishing group News Corp, whose newspapers include the Wall Street Journal, the Sun, the Times and the Australian, launched a legal action against the AI-powered search engine Perplexity, accusing it of “illegally copying” some of his US titles’ journalism.

The stars’ statement is a concerted effort to challenge the idea that creative works can be used as training data without recompense on the grounds of “fair use” – a US legal doctrine under which copyrighted material can be used without the owner’s permission in certain circumstances. Adding to their anger is the fact that these AI models can then be used to produce fresh works that compete with those of human beings.

AI was a key sticking point in last year’s dual strikes by Hollywood actors and writers, which secured agreements to ensure the new technology stays in the control of workers, rather than being used to replace them. Several ongoing legal cases are likely to decide whether the copyright battle will be similarly successful.

In the US, artists are also suing tech firms behind image generators, major record labels are suing AI music creators Suno and Udio, and a group of authors including John Grisham and George RR Martin are suing ChatGPT developer OpenAI for alleged breach of copyright.

In the battle to get AI companies to pay for the content they are scraping to build their tools, publishers are also pursuing legal avenues to get them to the negotiating table to sign licensing deals.

Publishers including the Politico-owner Axel Springer, Vogue publisher Condé Nast, the Financial Times and Reuters have content agreements in place with various AI companies and, in May, News Corp signed a five-year deal with OpenAI, reportedly valued at $250m. By contrast, the New York Times has filed a lawsuit against the ChatGPT-maker and last week sent a “cease and desist” letter to Perplexity.

In the UK, however, AI companies are lobbying to change the law to enable them to continue to build their tools without the risk of infringing intellectual property rights. Currently, the text and data mining required to train generative AI tools is permitted only for non-commercial research.

This week, Satya Nadella, the chief executive of Microsoft, called for a rethink of what is “fair use”. He argued that the large language models that underpin generative AI are not “regurgitating” the information they are trained on, which would be considered copyright infringement.

Labour’s new minister for AI and digital government, Feryal Clark, said recently she wanted to resolve the copyright dispute between the creative industries and AI companies by the end of the year.

She said that could be in the form of an amendment to existing laws, or new legislation, opening up the possibility that a new clause allowing AI companies to scrape data for commercial purposes could be added.

“Tech companies have used lots of UK content for free to train large language models and are now lobbying to weaken UK law to cover their tracks,” said Dan Conway, the chief executive of the Publishers Association.

“A cost of your business is paying for the content you are using. Labour has a once-in-a-generation opportunity to set policy conditions for responsible AI in the UK. Licensing agreements should be being signed between creative industries and AI companies to support the UK ecosystem.”

While news groups are pushing publicly against the exploitation of their content for AI, behind the scenes many are embracing the technology to replace editorial functions, fuelling fears among staff that commercially challenged publishers will use it as a Trojan horse to enable cost savings and job cuts.

Last month, the National Union of Journalists launched a campaign to highlight the issue, titled “Journalism before Algorithms”.

“The use of AI must be considered against a background of pay stagnation, below-inflation wage increases, understaffed newsrooms, and growing redundancies,” it said. “Threats to journalists’ jobs are at the forefront of minds … AI is no substitute for genuine journalism.”

“There is the question about how much publishers are using these tools themselves,” said Niamh Burns, senior research analyst at Enders Analysis. “I think the amount of deployment is low, there is a lot of experimenting out there, but I could see a world where publishers will use some of these tools a lot. However, publishers must be realistic about the scale of efficiencies and revenue generation opportunities.”

Burns said that, so far, publishers’ willingness to use AI tools that directly affect or create editorial content has been related to how commercially pressured the media environment is for each operator.

The once mighty BuzzFeed, whose market value has fallen from $1bn at flotation in 2021 to less than $100m, has been a rapid AI adopter against a backdrop of deep newsroom cuts and plummeting revenues.

And Newsquest, the second-biggest newspaper publisher in the embattled UK regional and local press market, has embarked on initiatives including a rapid increase in the number of “AI-assisted” journalism roles.

Quality national newspaper and media brands continue to be highly cautious, with many of them – including the Guardian – laying out strict principles to guide their work.

However, AI tools are being used behind the scenes, for example to categorise large datasets so that journalists can report new and exclusive stories.

“I do think that those media companies that are most commercially at risk in the near term are also at risk of overdoing it,” said Burns.

“A lot of that is to do with commercial models. If you rely on advertising from lots of traffic on social platforms and all you need is scale, not necessarily quality, then AI could be seen to really help.

“However, generative AI content creation will never be worth the costs or risks [for quality national titles]. And for any publisher, there is a longer-term cost to quality and risk to competitiveness in producing more cookie-cutter journalism.”

 
