In 2013, US robotics company Boston Dynamics revealed its new robot, Atlas. Unveiled at the Darpa Robotics Challenge, the 6ft 2in humanoid could walk on uneven ground, jump off boxes, and even climb stairs. It was the vision so often depicted in fiction: a robot designed to operate like us, able to take on all manner of everyday tasks. It seemed like the dawn of something. Robots were going to do all of our boring and arduous chores, and step up as elderly care workers to boot.
Since then, we’ve seen leaps forward in artificial intelligence (AI), from computer vision to machine learning. The recent wave of large language models and generative AI systems opens up new opportunities for human-computer interaction. But outside of research labs, physical robots remain largely restricted to factories and warehouses, performing very specific tasks, often behind a safety cage. Home robots are limited to vacuum cleaners and lawnmowers – not exactly Rosie the Robot.
“Robotic bodies haven’t developed substantially since the 1950s,” says Jenny Read, director of the robotics programme at the Advanced Research and Invention Agency (Aria), the UK government’s research and development body, established last year. “I’m not saying there’s been no advances, but when you look at what’s happened in computing and software, it’s really striking how little there’s been.”
Developing a robot simply takes more resources, says Nathan Lepora, a professor of robotics and AI at Bristol University. A talented individual with a computer can write an algorithm, but building a robot requires access to the physical device. “It’s a lot slower, and it’s a lot harder,” he says. “That’s fundamentally the reason why robotics is lagging behind AI.”
Research labs and companies hope to bridge this gap, with a slate of new humanoid robots in development and some starting to hit the market. Boston Dynamics retired its original hydraulic Atlas in April and unveiled a new, electric version, which it intends to commercialise in the next few years and will begin testing in Hyundai factories next year. Oregon-based Agility Robotics claims its Digit robot is the first humanoid to actually get paid for a job, moving boxes in a logistics facility. Elon Musk insists that Tesla’s humanoid robot, known as Optimus or Tesla Bot, will start working in its car factories next year.
But there’s still a long way to go before we see robots operating outside of tightly controlled environments. Advances in AI can only take us so far with the current hardware, says Read – and for many tasks, a robot’s physical capabilities are critical. Generative AI systems can write poetry or make pictures, but they can’t do the dirty and dangerous jobs we most want to automate. For those, you need more than a brain in a box.
* * *
A useful robot design often starts with hands. “Many of the use cases for robots really depend on being able to handle things precisely and skilfully without damaging the object,” says Read. Humans are very good at this. We can instinctively switch from lifting a dumbbell to handling an eggshell, or from chopping a carrot to stirring a sauce. We also have excellent tactile sensing, demonstrated by our ability to read braille. In comparison, robots struggle. Read’s Aria programme, which is backed by £57m of funding, is focused on this problem.
One of the challenges of robot dexterity is scale, says Rich Walker, director of London-based Shadow Robot. In the company’s office in Camden, he shows off the Shadow Dexterous Hand. It’s the size of a man’s hand, with four fingers and a thumb, and joints that mimic knuckles. But while the digits look dainty, the hand is attached to a robot arm much wider than a human forearm, chock-full of electronics, cabling, actuators and everything else needed to operate the hand. “It’s a packing problem,” Walker says.
An advantage of a human-scale hand is that it’s the right size and shape to handle human tools. Walker gives the example of a laboratory pipette, which he’s modified with Sugru, a mouldable adhesive, to make it more ergonomic. You could attach a pipette tool directly to a robot hand, but then it would only be able to use a pipette and not, say, a pair of scissors, or a screwdriver.
But a completely human-like hand is not best for every task. Shadow Robot’s most recent hand, DEX-EE, looks rather alien. It has three digits, more like thumbs than fingers, which are notably bigger than a human’s and are covered in tactile sensors. The company designed it in collaboration with Google DeepMind, Alphabet’s AI research lab, which wanted a robot hand that could learn how to pick things up by repeatedly trying to do so – a trial-and-error approach known as reinforcement learning. But this posed challenges: robot hands are usually designed expressly not to crash into things, and are prone to break if they do so. Murilo Martins, a DeepMind research engineer, says that when he ran experiments with the original Dexterous Hand, “every half an hour I would break a tendon”.
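Reinforcement learning of this kind boils down to a simple loop: attempt an action, observe whether it succeeded, and shift future attempts towards what worked. The Python sketch below illustrates the idea on an invented one-step grasping task – the grip forces, rewards and thresholds are all hypothetical, and this is not DeepMind’s code or the DEX-EE control stack.

```python
import random

# Toy illustration of trial-and-error (reinforcement) learning: the "hand"
# repeatedly tries grip forces on an object and learns which ones succeed.
# All values here are invented for illustration.

GRIP_FORCES = [1, 2, 3, 4, 5]              # candidate actions (arbitrary units)
q_values = {f: 0.0 for f in GRIP_FORCES}   # running estimate of each action's value
counts = {f: 0 for f in GRIP_FORCES}

def attempt_grasp(force):
    """Simulated outcome: too little force drops the object, too much crushes it."""
    if force < 2:
        return 0.0    # dropped
    if force > 4:
        return -1.0   # crushed - the kind of failure that breaks real hardware
    return 1.0        # held securely

EPSILON = 0.1  # fraction of attempts spent exploring rather than exploiting

for trial in range(1000):
    if random.random() < EPSILON:
        force = random.choice(GRIP_FORCES)        # explore a random action
    else:
        force = max(q_values, key=q_values.get)   # exploit the best known action
    reward = attempt_grasp(force)
    counts[force] += 1
    # Incremental average: nudge the estimate towards the observed reward.
    q_values[force] += (reward - q_values[force]) / counts[force]

print(q_values)  # the middle grip forces end up with the highest estimated values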
DEX-EE prioritises robustness: a video shows the three digits happily opening and closing while being struck by a mallet. Its bigger size accommodates larger pulleys, which put less stress on the wire tendons, meaning it can reliably operate for at least 300 hours.
Even so, says Maria Bauza, a DeepMind research scientist, time with the robot is precious. Last week, DeepMind published research outlining a new training method it calls DemoStart. This takes the same trial-and-error approach but starts by using a simulated robot hand instead of a real one. After training the simulated hand to complete tasks such as tightening a nut and bolt, the researchers transferred this learned behaviour to the real DEX-EE hand. “The hands still have gone through thousands and thousands of experiments,” Bauza says. “It’s just that we don’t make them start from scratch.”
This reduces the time and cost of running experiments, making it easier to train robots that can adapt to different tasks. The skills don’t always transfer perfectly, however; while DeepMind’s simulated robot hand was able to insert a plug into a socket 99.6% of the time, the real hand only managed it 64% of the time.
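The sim-to-real pattern can be sketched in a few lines: tune a behaviour against a forgiving simulator, then measure how it holds up under harsher “real” conditions. The Python below is a hypothetical illustration of that gap – the plug-insertion task, tolerances and noise levels are invented, and it is not DemoStart itself.

```python
import random

# Toy sketch of sim-to-real transfer: tune behaviour in a cheap simulator,
# then evaluate it on a "real" system whose physics differ slightly.
# All numbers are invented for illustration.

def insert_plug(angle, tolerance, noise):
    """One insertion attempt: succeeds if the noisy approach angle hits the socket."""
    measured = angle + random.gauss(0.0, noise)
    return abs(measured - 10.0) <= tolerance

def success_rate(angle, tolerance, noise, trials=2000):
    return sum(insert_plug(angle, tolerance, noise) for _ in range(trials)) / trials

# "Training": search for the best approach angle using the simulator's
# forgiving parameters (wide tolerance, low sensor noise).
sim_angle = max(
    (a / 10.0 for a in range(200)),
    key=lambda a: success_rate(a, tolerance=2.0, noise=0.3),
)

print("sim success: ", success_rate(sim_angle, tolerance=2.0, noise=0.3))
print("real success:", success_rate(sim_angle, tolerance=1.0, noise=0.8))
# The same learned behaviour scores near-perfectly in simulation but
# noticeably worse under "real" conditions - the sim-to-real gap.
```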
The work is an example of how developments in AI and robot bodies go hand-in-robot-hand. Only through physical interactions can robots truly make sense of their environment. After all, Read points out, the large language models behind text generators such as ChatGPT were trained on a huge corpus of human language shared on the internet, “but where do I get the data about what it feels like to pick a strawberry or to make a sandwich?”
As the DeepMind robotics team writes: “A large language model could tell you how to tighten a bolt or tie your shoes, but even if it was embodied in a robot, it wouldn’t be able to perform those tasks itself.”
Martins goes a step further. He believes robotics is critical to achieving artificial general intelligence (AGI), the broad, human-equivalent intelligence many AI researchers dream of. He reasons that an AI can only really understand our world if it has a physical form. “To me, AGI doesn’t exist without an embodiment, much in the same way that human intelligence doesn’t exist without us having our own bodies,” he says.
* * *
Hands, though important, are just one body part. While Shadow Robot and others focus on fingers, an increasing number of companies and labs are developing full humanoids.
The appeal of humanoids may partly be psychological. “It’s the robot that we were all expecting – it’s like C-3PO,” says Walker. But there is also a logic to using the human form as a muse. “We designed all of our environments around people,” says Jonathan Hurst, Agility Robotics’s co-founder and chief robot officer. “So having a roughly human form factor is a very good way to be able to locomote and manipulate and coexist with people.”
But a humanoid may not be the best design for every job. A wheeled robot could go anywhere a wheelchair user can, and when it comes to trickier terrain, four legs may be better than two. Boston Dynamics’s dog-like Spot can scamper across rough ground or stairs and self-right if it falls – something two-legged robots struggle with. “Just because a humanoid robot takes a similar form as a human doesn’t mean it needs to move that way and be constrained by the limitations of our joints,” adds a Boston Dynamics spokesperson, over email.
For now, humanoids are still finding their feet. Flashy videos and sleek designs may give people an unrealistic sense of how capable or reliable they are, says Bristol University’s Lepora. Boston Dynamics’s clips are impressive, but the company is also known for its blooper reels showing its robot fails. In January, Musk shared a video of Optimus folding a shirt – but keen-eyed viewers spotted telltale signs that the robot was being teleoperated.
A major challenge in bringing robots out of labs and industrial environments and into homes or public spaces is safety. In June, the Institute of Electrical and Electronics Engineers (IEEE) launched a study group to explore standards specifically for humanoid robots. Aaron Prather, the group’s chair, explains that a humanoid in a shared space is a different proposition to an industrial robot encased in protective caging. “It’s one thing for them to interact with a fellow worker at an Amazon facility or a Ford factory, because that’s a trained worker working with that robot,” he says. “[But if] I put that robot out in the public park, how’s it going to interact with kids? How’s it going to interact with folks that don’t understand what’s going on?”
Hurst envisages robots in the retail sector as a next step, stocking shelves or working in back rooms. Prather believes we’ll soon see robots waiting tables. For many applications, however, it may not make financial sense to use a robot. Walker gives the example of a delivery robot. “It’s got to be cost effective [compared] with someone on a minimum-wage, zero-hours contract on an e-scooter,” he says.
Most of the roboticists I spoke with said a multipurpose home robot – the kind that can do your dishes, wash your laundry and walk your dog – is a way off. “The era of a useful humanoid is here, but the path to a truly general-purpose humanoid robot will be long and hard and is many years away,” says Boston Dynamics. Care robots, often hyped as the solution to ageing populations, will be a particularly tough prospect, says Read. “Let’s get to the point where a robot can reliably disassemble a laptop or make you a sandwich, and then we’ll think about how it might care for an elderly person,” she says. That’s if we even want robots to take on care work. Just like art and poetry, perhaps some roles are still best with a human touch.