Depression seems a uniquely human way of suffering, but surprising new ways of thinking about it are coming from the field of artificial intelligence. Worldwide, over 350 million people have depression, and rates are climbing. The success of today’s generation of AI owes much to studies of the brain. Might AI return the favour and shed light on mental illness?
The central idea of computational neuroscience is that similar issues face any intelligent agent – human or artificial – and therefore call for similar sorts of solutions. Intelligence of any form is thought to depend on building a model of the world – a map of how things work that allows its owner to make predictions, plan and take actions to achieve its goals.
Setting the right degree of flexibility in learning is a critical problem for an intelligent system. A person’s model of the world is built up slowly over years of experience. Yet sometimes everything changes from one day to the next – if you move to a foreign country, for instance. This calls for much more flexibility than usual. In AI, a global parameter that controls how flexible a model is – how fast it changes – is called the “learning rate”.
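To make the idea concrete, here is a minimal sketch (my illustration, not taken from any specific AI system) of the standard delta-rule update, in which a single learning-rate parameter controls how fast an agent's estimate of the world can change:

```python
def update(estimate, observation, learning_rate):
    """One step of a simple delta-rule update:
    move the estimate toward the observation by a fraction
    set by the learning rate."""
    return estimate + learning_rate * (observation - estimate)

def track(observations, learning_rate, estimate=0.0):
    """Run the update over a whole sequence of observations."""
    for obs in observations:
        estimate = update(estimate, obs, learning_rate)
    return estimate

# The world changes abruptly, like moving to a foreign country:
# the quantity being tracked jumps from 0 to 10.
world = [0.0] * 50 + [10.0] * 50

slow = track(world, learning_rate=0.01)  # inflexible: lags far behind
fast = track(world, learning_rate=0.5)   # flexible: adapts quickly
print(f"slow learner's estimate: {slow:.2f}")
print(f"fast learner's estimate: {fast:.2f}")
```

With the low learning rate the estimate is still nowhere near the new reality after 50 observations; with the high one it has essentially caught up.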
Failure to adapt to adversity may be one of the main reasons why humans get depressed. Someone who becomes disabled by a severe injury, for example, suddenly needs to learn to see themselves in a new way. A person who manages this may thrive; one who fails to adapt may become depressed.
The idea of a depressed AI seems odd, but machines could face similar problems. Imagine a robot with a hardware malfunction: perhaps it needs to learn a new way of grasping objects. If its learning rate is not high enough, it may lack the flexibility to change its algorithms. If severely damaged, it might even need to adopt new goals. If it fails to adapt, it could give up and stop trying.
A “depressed” AI could be easily fixed by a supervisor boosting its learning rate. But imagine an AI sent light years away to another star system. It would need to set its own learning rate, and this could go wrong.
One might think the solution is simply to keep flexibility high. But too much flexibility also has a cost. If the learning rate is too high, the agent is forever overwriting what it previously learned and never accumulates knowledge. If its goals are too flexible, an AI is rudderless, distracted by every new encounter.
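The forgetting problem can be seen in the same kind of toy sketch (again my illustration, using the simple delta-rule update): in a stable but noisy world, an agent with the learning rate at its maximum just mirrors each new observation, so its knowledge churns constantly and the noise is never averaged away.

```python
import random

def trajectory(observations, learning_rate, estimate=0.0):
    """Return the agent's estimate after each observation."""
    path = []
    for obs in observations:
        estimate += learning_rate * (obs - estimate)
        path.append(estimate)
    return path

def jitter(path):
    """Average step-to-step change in the estimate:
    how much accumulated knowledge is being churned."""
    return sum(abs(b - a) for a, b in zip(path, path[1:])) / (len(path) - 1)

random.seed(0)
# A stable world whose true value is 5, observed with noise.
world = [5.0 + random.gauss(0, 2.0) for _ in range(500)]

stable = trajectory(world, learning_rate=0.05)
churning = trajectory(world, learning_rate=1.0)  # forgets everything each step
print(f"jitter at learning rate 0.05: {jitter(stable):.2f}")
print(f"jitter at learning rate 1.0:  {jitter(churning):.2f}")
```

The maximally flexible agent's estimate bounces around many times more than the slower learner's, even though the world underneath is not changing at all.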
Computational psychiatrists think the brain’s equivalent of an AI’s key global variables lies in its “neuromodulators”, which include the dopamine and serotonin systems. There are only a handful of these highly privileged groups of cells, and they broadcast their special chemical messages to almost the entire brain.
A line of studies from my laboratory and others suggests that the brain sets its learning rate via the serotonin system. In the lab, if we teach a mouse a task with certain rules and then abruptly change them, serotonin neurons respond strongly. They seem to broadcast a signal of surprise: “Oops! Time to change the model.” When serotonin is then released in downstream brain areas, it can be seen in the laboratory to promote plasticity, or rewiring, particularly the reworking of the circuitry of an outdated model.
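This surprise-driven flexibility can be caricatured in code. The sketch below is a toy illustration of the general idea, not a biological model: a “surprise” signal, loosely analogous to serotonin, raises the learning rate whenever the agent's predictions go badly wrong, and then decays back toward a cautious baseline once the new model fits.

```python
def adaptive_track(observations, base_lr=0.05, surprise_boost=0.5,
                   decay=0.9, threshold=3.0):
    """Track a sequence, boosting the learning rate after large errors."""
    estimate, lr = 0.0, base_lr
    for obs in observations:
        error = obs - estimate
        if abs(error) > threshold:       # "Oops! Time to change the model."
            lr = min(1.0, lr + surprise_boost)
        estimate += lr * error           # plasticity: rework the estimate
        lr = max(base_lr, lr * decay)    # the surprise signal fades
    return estimate

world = [0.0] * 50 + [10.0] * 50         # the rules change abruptly
print(f"estimate after the change: {adaptive_track(world):.2f}")
```

Unlike a fixed low learning rate, this agent stays stable while the world is predictable yet snaps quickly onto the new reality after the rules change.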
The most commonly prescribed antidepressants are selective serotonin reuptake inhibitors (SSRIs), which boost the availability of serotonin in the brain. They are naively depicted as “happiness pills”, but this research suggests they work mainly by promoting brain plasticity. If true, getting out of depression starts with flexibility.
If these ideas are on the right track, susceptibility to depression is one of the costs of the ability to adapt to an ever-changing environment. Today’s AIs are learning machines, but highly specialised ones with no autonomy. As we take steps toward more flexible “general AI”, we can expect to learn more about how this can go wrong, with more lessons for understanding not only depression but also conditions such as schizophrenia.
For a human, to be depressed is not merely to have a problem with learning, but to experience profound suffering. That is why, above all else, it is a condition that deserves our attention. For a machine, what looks like depression may involve no suffering whatsoever. But that does not mean that we cannot learn from machines how human brains might go wrong.
• Zachary Mainen is a neuroscientist whose research focuses on the brain mechanisms of decision-making