Gaby Hinsliff 

Sorry, Labour, but ChatGPT teachers are a lesson in how not to transform our schools

The government seems to think AI will allow it to do more with less. But there are plenty of reasons to be doubtful, writes Guardian columnist Gaby Hinsliff
  
  

Keir Starmer has promised to ‘move forward this autumn with harnessing the full potential of AI’ in schools. Photograph: Graeme Robertson/The Guardian

Like so many shiny-eyed new teachers, Ed began his career amid high hopes. He was going to be a gamechanger, his bosses thought; a breath of fresh air, capable of engaging even kids at risk of dropping out. Ed had been trained not only to tailor lessons to each child’s individual needs, but patiently to field all those time-consuming parental questions about everything from teenage mental health to what their little darlings were getting for lunch.

Unfortunately, the one thing Ed had in common with many promising new teachers is that he burned out fast. Launched in March, by June he was being unceremoniously relieved of his duties after the tech company paid to develop him for the Los Angeles school district reportedly got into financial difficulties. For Ed wasn’t a teacher but a $6m AI-powered chatbot designed to act as a personalised learning assistant for children, and his brief career offers a timely lesson in how not to transform public services using artificial intelligence.

The idea of using technology as a kind of magic bullet enabling the state to do more with less has become increasingly central to Labour’s plans for reviving British public services on what Rachel Reeves suggests will be a painfully tight budget. In a series of back-to-school interventions this week, Keir Starmer promised to “move forward with harnessing the full potential of AI”, while the science secretary, Peter Kyle, argued that automating some routine tasks, such as marking, could free up valuable time for teachers to teach.

A recent report from Tony Blair’s eponymous thinktank, which has close links to tech industry donors and is positively evangelical about AI, suggests automating marking and lesson planning could cut teaching workloads by a staggering 25% – roughly the equivalent of the much resented 12 hours’ average unpaid overtime that eats into teachers’ evenings and weekends.

But that’s just the beginning, the report’s authors argue: in theory Britain could save an astonishing £40bn a year by getting AI to take over backroom tasks across the public sector. Imagine, they argue, being able to process planning applications or benefit claims at lightning speed, instead of making people wait months to discover their fate. Imagine systems that could cut waiting lists by better managing NHS beds, or reduce pressure on hospitals by predicting who was likely to fall seriously ill before it happened. Some of these dreams are already reality: a pilot scheme in Somerset that analysed patterns in vulnerable patients’ data cut A&E visits by an impressive 60%. But are parts of this grand vision, much like the ill-fated Ed, just too good to be true?

The obvious lesson from Los Angeles is to beware of being sold a pup by a hype-driven, profit-hungry industry still in its infancy. A group of worried headteachers convened by the independent school head (and Downing Street historian) Anthony Seldon last year warned that schools were “bewildered by the very fast rate of change in AI”, and didn’t know whom to trust, concluding: “We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools.”

The scheme Kyle unveiled this week, a £4m government-backed project to train AI on approved lesson plans and anonymised pupil assessments, suggests he is well aware of these fears and keen to create tools teachers can have confidence in. But the real political challenge may be persuading a suspicious public to embrace this faceless, futuristic state.

Most parents would probably be happy enough to have a six-year-old’s spelling test machine-marked – at least once someone fixes whatever glitch made ChatGPT unable to tell how many “r”s there are in “strawberry” – and many primary schools already set maths homework via apps that automatically check answers and record pupils’ progress. But I wouldn’t trust a language bot to analyse a GCSE English essay yet, even if AI’s current alarming tendency to invent facts for no obvious reason can be ironed out.

Memories of that lockdown summer when exam boards used an algorithm to award GCSE and A-level grades, provoking a furious backlash from parents, suggest the public is far from ready for AI to make potentially life-and-death judgments on someone’s eligibility for asylum, say, or disability benefits. Those eye-catching savings identified by the Tony Blair Institute, meanwhile, are achieved partly by doing away with the need for more than 1m civil service jobs – presumably over the unions’ dead body.

Beyond all this lies a tricky question about the nature and pace of work. The siren promise of new technology is always to take care of the boring stuff, freeing humans up for something more fun – or, as the Tony Blair Institute report puts it: “turning the public sector into a rewarding career of choice for ambitious people working at the cutting edge”.

Remove the relaxingly humdrum bits you can do in your sleep, however, and a job may become not just more exciting but also significantly more stressful – as GPs discovered when practice nurses began taking on more routine tasks, leaving doctors dealing with difficult cases back to back and sometimes suffering burnout as a result. It’s true that AI, handled right, has enormous capacity for good. But as Starmer himself keeps saying, there are no easy answers in politics – not even, it turns out, if you ask ChatGPT.

  • Gaby Hinsliff is a Guardian columnist

 
