AI mediation tool may help reduce culture war rifts, say researchers

System built by Google DeepMind team takes individual views and generates a set of group statements

Nicola Davis, science correspondent
  
  

Participants can feed back on the initial group statement, producing a second collection of AI-generated statements. Photograph: Image Source/Alamy

Artificial intelligence could help reduce some of the most contentious culture war divisions through a mediation process, researchers claim.

Experts say a system that creates group statements reflecting both majority and minority views can help people find common ground.

Prof Chris Summerfield, a co-author of the research from the University of Oxford, who worked at Google DeepMind at the time the study was conducted, said the AI tool could have multiple purposes.

“What I would like to see it used for is to give political leaders in the UK a better sense of what people in the UK really think,” he said, noting surveys gave only limited insights, while forums known as citizens’ assemblies were often costly, logistically challenging and restricted in size.

Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the “Habermas Machine” – an AI system named after the German philosopher Jürgen Habermas.

The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but allows the statement with the greatest endorsement to be selected.

Participants can also feed critiques of this initial group statement back into the Habermas Machine, which produces a second collection of AI-generated statements that can again be ranked and a final revised text selected.
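The paper gives the full technical detail, but the two-round loop described above can be illustrated with a rough sketch. The outline below is hypothetical Python, not code from the study: generate_statements stands in for the language model drafting candidate group statements, while collect_ratings and collect_critiques stand in for the participants' rankings and written critiques.

    # Hypothetical outline of the two-round mediation process described
    # above; the function arguments stand in for model calls and human input.
    def mediate(opinions, generate_statements, collect_ratings, collect_critiques):
        # Round 1: draft candidate group statements from individual views,
        # then let participants rate them and keep the most endorsed one.
        candidates = generate_statements(opinions)
        ratings = collect_ratings(candidates)
        initial = max(candidates, key=lambda s: ratings[s])

        # Round 2: feed critiques of the winning statement back in,
        # generate revised candidates, and select the final text.
        critiques = collect_critiques(initial)
        revised = generate_statements(opinions, critiques=critiques)
        final_ratings = collect_ratings(revised)
        return max(revised, key=lambda s: final_ratings[s])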

The team used the system in a series of experiments involving a total of more than 5,000 participants in the UK, many of whom were recruited through an online platform.

In each experiment, the researchers asked participants to respond to topics ranging from the role of monkeys in medical research to religious teaching in public education.

In one experiment, involving about 75 groups of six participants, the researchers found the initial group statement from the Habermas Machine was preferred by participants 56% of the time over a group statement produced by human mediators. The AI-generated statements were also rated as clearer, more informative and of higher quality, among other traits.

Another series of experiments found the full two-step process with the Habermas Machine boosted the level of group agreement relative to participants’ initial views before the AI-mediation began. Overall, the researchers found agreement increased by eight percentage points on average, equivalent to four people out of 100 switching their view on a topic where opinions were originally evenly split.
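While the paper defines its own measure of agreement, the equivalence is easy to see if agreement is taken, purely for illustration, as the gap between majority and minority: each person who switches sides adds one point to the majority and removes one from the minority, moving the gap by two points.

    # Illustrative arithmetic only: assumes agreement is measured as the
    # majority-minority margin, which may differ from the paper's metric.
    majority, minority = 50, 50                   # 100 people, evenly split
    switchers = 4                                 # four people change sides
    gap_before = majority - minority              # 0 points
    gap_after = (majority + switchers) - (minority - switchers)
    print(gap_after - gap_before)                 # 8 percentage points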

However, the researchers stress that participants did not always come off the fence, or switch opinion, to back the majority view.

The team found similar results when they used the Habermas Machine in a virtual citizens’ assembly in which 200 participants, representative of the UK population, were asked to deliberate on questions relating to topics ranging from Brexit to universal childcare.

The researchers say further analysis, looking at the numerical representations the AI system builds of the texts it is given, sheds light on how it generates group statements.

“What [the Habermas Machine] seems to be doing is broadly respecting the view of the majority in each of our little groups, but kind of trying to write a piece of text that doesn’t make the minority feel deeply disenfranchised – so it sort of acknowledges the minority view,” said Summerfield.

However, the Habermas Machine itself has proved controversial, with other researchers noting that the system does not help translate democratic deliberations into policy.

Dr Melanie Garson, an expert in conflict resolution at UCL, added that while she was a tech optimist, one concern was that some minorities might be too small to influence such group statements yet could be disproportionately affected by the result.

She also noted that the Habermas Machine does not offer participants the chance to explain their feelings, and hence develop empathy with those of a different view.

Fundamentally, she said, when using technology, context is key.

“[For example] how much value does this deliver in the perception that mediation is more than just finding agreement?” Garson said. “Sometimes, if it’s in the context of an ongoing relationship, it’s about teaching behaviours.”

 
