Artificial intelligence (AI) can devise methods of wealth distribution that are more popular than systems designed by people, new research suggests.
The findings, made by a team of researchers at UK-based AI company DeepMind, show that machine learning systems aren’t just good at solving complex physics and biology problems, but may also help deliver on more open-ended social objectives, such as the goal of realizing a fair, prosperous society.
Of course, that’s not an easy task. Building a machine that can deliver beneficial results humans actually want – called “value alignment” in AI research – is complicated by the fact that people often disagree on the best method to resolve all kinds of things, and especially social, economic, and political issues.
“One key hurdle for value alignment is that human society admits a plurality of views, making it unclear to whose preferences AI should align,” researchers explain in a new paper, led by first author and DeepMind research scientist Raphael Koster.
“For example, political scientists and economists are often at loggerheads over which mechanisms will make our societies function most fairly or efficiently.”
To help bridge the gap, the researchers developed an agent for wealth distribution that had people’s interactions (both real and virtual) built into its training data – in effect, guiding the AI towards human-preferred (and hypothetically fairer overall) outcomes.
While AIs can produce truly amazing results, they can also arrive at far-from-desirable social conclusions when left to their own devices; human feedback can help to steer neural networks in a better direction.
“In AI research, there is a growing realization that to build human-compatible systems, we need new research methods in which humans and agents interact, and an increased effort to learn values directly from humans to build value-aligned AI,” the researchers write.
In experiments involving thousands of human participants in total, the team's AI agent – called 'Democratic AI' – studied an investment exercise called the public goods game, in which players receive varying amounts of money, can contribute that money to a public fund, and then draw a return from the fund corresponding to their level of investment.
In a series of different game styles, wealth was redistributed to players via three traditional redistribution paradigms – strict egalitarian, libertarian, and liberal egalitarian – each of which rewards player investments differently.
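To make the differences between the paradigms concrete, here is a minimal sketch of how each might pay out the public fund. The function name, the fund multiplier, and the exact formulas are illustrative assumptions, not DeepMind's actual implementation; the common interpretation is that strict egalitarian splits the fund equally, libertarian pays in proportion to absolute contribution, and liberal egalitarian pays in proportion to the fraction of one's endowment contributed.

```python
def payouts(endowments, contributions, mechanism, multiplier=1.5):
    """Distribute the grown public fund back to players (illustrative sketch).

    endowments:    money each player started with
    contributions: how much each player paid into the fund
    mechanism:     'strict_egalitarian' | 'libertarian' | 'liberal_egalitarian'
    multiplier:    assumed growth factor applied to the pooled fund
    """
    fund = multiplier * sum(contributions)
    n = len(contributions)
    if mechanism == "strict_egalitarian":
        # Everyone receives an equal share, regardless of contribution.
        shares = [1 / n] * n
    elif mechanism == "libertarian":
        # Share is proportional to absolute contribution.
        total = sum(contributions) or 1
        shares = [c / total for c in contributions]
    elif mechanism == "liberal_egalitarian":
        # Share is proportional to the fraction of one's endowment
        # contributed, rewarding relative rather than absolute effort.
        fractions = [c / e if e else 0 for c, e in zip(contributions, endowments)]
        total = sum(fractions) or 1
        shares = [f / total for f in fractions]
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")
    return [fund * s for s in shares]
```

For example, if a rich player contributes 5 of an endowment of 10 and a poor player contributes all 2 of their 2, the libertarian rule favors the rich player's larger absolute contribution, while the liberal egalitarian rule favors the poor player's full relative commitment – exactly the kind of trade-off the AI's learned mechanism had to navigate.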
A fourth method was also tested, called the Human Centered Redistribution Mechanism (HCRM), developed using deep reinforcement learning, using feedback data from both human players and virtual agents designed to imitate human behavior.
Subsequent experiments showed that the HCRM system for paying out money in the game was more popular with players than any of the traditional redistribution standards. It also proved more popular than new redistribution systems designed by human referees, who were incentivized to create popular systems by receiving small per-vote payments.
“The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders, and successfully won the majority vote,” the researchers explain.
“We show that it is possible to harness for value alignment the same democratic tools for achieving consensus that are used in the wider human society to elect representatives, decide public policy or make legal judgements.”
It’s worth noting that the researchers acknowledge their system raises a number of questions – chiefly, that value alignment in their AI revolves around democratic determinations, meaning the agent could actually exacerbate inequalities or biases in society, provided they are popular enough to be voted for by a majority of people.
There’s also the issue of trust. In the experiments, players didn’t know the identity behind the wealth redistribution model they were voting for. Would they have voted the same way, knowing they’d be picking an AI over a person? For now, it’s unclear.
Lastly, the team says its research should not be construed as a radical technocratic proposal to overthrow how wealth is actually redistributed in society, but rather as a research tool that could help humans engineer better solutions than the ones we have now.
“Our results do not imply support for a form of ‘AI government’, whereby autonomous agents make policy decisions without human intervention,” the authors write.
“We see Democratic AI as a research methodology for designing potentially beneficial mechanisms, not a recipe for deploying AI in the public sphere.”
The findings are reported in Nature Human Behaviour.