Dan Milmo Global technology editor 

UK signs first international treaty to implement AI safeguards

Also signed by the EU, US and Israel, the treaty aims to mitigate the threats that AI may pose to human rights, democracy and the rule of law
  
  

The Council of Europe framework convention on artificial intelligence was signed at a conference in Vilnius. It aims to consolidate the patchwork of agreements on AI into a united, global framework. Photograph: Council of Europe

The UK government has signed the first international treaty on artificial intelligence in a move that aims to prevent misuses of the technology, such as spreading misinformation or using biased data to make decisions.

Under the legally binding agreement, states must implement safeguards against any threats posed by AI to human rights, democracy and the rule of law. The treaty, called the framework convention on artificial intelligence, was drawn up by the Council of Europe, an international human rights organisation, and was signed on Thursday by the EU, UK, US and Israel.

The justice secretary, Shabana Mahmood, said AI had the capacity to “radically improve” public services and “turbocharge” economic growth, but that it must be adopted without undermining basic human rights.

“This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law,” she said.

Here is an outline of the convention and its impact on AI use.

What is the aim of the convention?

According to the Council of Europe, its goal is to “fill any legal gaps that may result from rapid technological advances”. Recent breakthroughs in AI – the term for computer systems that can perform tasks typically associated with intelligent beings, such as learning and problem-solving – have triggered a regulatory scramble around the world to mitigate the technology’s potential flaws.

The result is a patchwork of regulations and agreements covering the technology, from the EU AI Act to last year’s Bletchley declaration at the inaugural global AI safety summit – and a voluntary testing regime signed by a host of countries and companies at the same gathering. Thursday’s agreement is an attempt to create a global framework.

The treaty states that AI systems must comply with a set of principles including: protecting personal data; non-discrimination; safe development; and human dignity. As a result, governments are expected to introduce safeguards such as stemming AI-generated misinformation and preventing systems from being trained on biased data, which could result in wrongful decisions in a number of situations such as job or benefits applications.

Who is covered by the treaty?

It covers the use of AI by public authorities and the private sector. Any company or body using relevant AI systems must assess their potential impact on human rights, democracy and the rule of law – and make that information available to the public. People must be able to challenge decisions made by AI systems and be able to lodge complaints with authorities. Users of AI systems must also be given notice that they are dealing with an AI and not a human being.

How will it be implemented in the UK?

The UK now needs to assess whether the treaty’s various provisions are already covered by existing legislation – such as the Human Rights Act and other human rights laws. The government is drawing up a consultation on a new AI bill.

“Once the treaty is ratified and brought into effect in the UK, existing laws and measures will be enhanced,” said the government.

In terms of imposing sanctions, the convention refers to authorities being able to ban certain uses of AI. For instance, the EU AI Act bans systems that use facial recognition databases scraped from CCTV or the internet. It also bans systems that categorise humans based on their social behaviour.

 
