06.12.2025

The New Commons: The Case for an AI Dividend

As machines begin to perform not only physical labor but also cognitive tasks once thought to require distinctly human intelligence, we are entering entirely uncharted territory.

Making predictions about the evolution of AI is a fool’s errand, but here are two that seem safe: (1) AI will likely make some people very rich and (2) AI will likely make a lot of workers largely unnecessary. Navigating that tension is the most important question of our time.

This is an argument for an AI dividend: a proposal that some portion of the wealth generated by artificial intelligence should be shared broadly with the public, in the form of direct cash payments to people.

There are three overlapping cases for an AI dividend: moral, practical, and mechanical. 

The Moral Case

The generally accepted justification for obscene wealth is that it’s an incentive for innovation, and innovation is risky. People wouldn’t take that risk, the argument goes, without a chance for a massive pot of gold at the end of the entrepreneurship rainbow.

There is pushback, of course (see: “every billionaire is a policy failure,” among others). But this reasoning has largely held up over the last 150 years or so, in the age of “self-created,” rather than inherited, oligarchs.

That justification has grown murkier in the Internet age, but for those who bought into it in the first place, it still mostly holds. Consider tech tycoons whose wealth is based on mining user data and monetizing attention. Their wealth comes largely from the time, attention, and (sometimes) labor of their users, so many argue that they no longer really “deserve” all that they get. Still, tech tycoons did create the initial platforms, conditions, and strategies for the business model. The wealth they’ve generated is at least somewhat tethered to the active work of human hands and minds, often employed or contracted by the tech giants who profit. In the case of AI, however, the entire argument falls apart.

The real wealth-generating power of AI won’t come from the first several generations of models, which are still largely built and guided by human creators. The true breakthroughs will come when those models begin doing the creating and coding themselves. The exciting (and, to some, terrifying) future of AI lies not in human innovation alone, but in countless iterations of the AI itself doing the automating.

We can’t predict what weird, wonderful, life-altering, or potentially apocalyptic consequences will emerge from this process, but there’s a very good chance that many of the outputs will generate enormous amounts of wealth.

That’s where the moral case for an AI dividend begins. If the wealth of the future is being created not by human hands but by machines improving themselves in ways we don’t fully control, direct, or understand, then it can’t justifiably belong to any one person or company.

This is not the “public equity” theory of public ownership, which argues that where taxpayer investment laid the groundwork for privatized profit (as in drug discovery or other federally funded R&D), the public should get a stake. That’s an argument worth having, but it’s not this one.

Instead, this argument is more analogous to natural resource dividends, like those paid to residents of Alaska or Alberta. While individuals and companies do profit from the extraction of oil and gas, there’s a widespread understanding that the underlying resource belongs to the public, and some of that wealth should be returned to people.

So it goes with AI. If the 10 millionth iteration of automated automation spits out something that changes the world and generates extraordinary profits, that creation should be understood as emerging from a new kind of commons.

Commons, as used here, means resources that are shared and managed collectively for the benefit of all, whether natural assets like forests or fisheries, or cultural goods like shared knowledge or norms. These resources are not necessarily ownerless but are governed under the premise that access and benefits should be widely shared.

With AI, we are entering fuzzier territory. The new commons is not just one thing; it may be the AI itself, or the products it creates, or the vast wealth those products generate. More likely, it is portions of all of the above. These systems are trained on public data, operate on infrastructure funded and supported in part by public dollars, and will produce outputs that are increasingly disconnected from any human input.

If wealth is born of the commons, the people should share in it.

The Practical Case

The practical case for an AI dividend is straightforward: the nature of work is about to dramatically and irreversibly change.

Nearly every state has formed an AI task force. These commissions tend to focus on familiar questions: Which sectors are most at risk? What skills will workers need to stay competitive? The implicit assumption is that, while the future of work will drastically change, the new reality will still hew to familiar contours, with full-time paid employment as the organizing principle of the economy.

But many experts expect not just a shift in the type of work, but also a sharp reduction in the amount of work available for people to do.

It’s worth acknowledging the standard free-market response: technological revolutions always spark disruption and fears about mass unemployment, but they also create new opportunities we can’t yet imagine. It happened with the Industrial Revolution, the automobile, and the Internet, so why should AI be different?

They could be right. But many serious observers argue that this time really is different, because humans are no longer at the center. In past shifts, people still drove innovation and guided how technology was used. Now, automation is (or soon will be) automating itself.

For a long time, we’ve measured economic health through job growth, assuming that if people have work, the system is functioning. But what happens when the work goes away?

The loss won’t come only for factory workers or truck drivers, jobs we’ve long known to be vulnerable to automation. The next frontier of disruption is white-collar, middle-income, and service-sector work. Large language models and other AI systems are already performing tasks once thought to require human judgment: document review, customer service, scheduling, basic research, and content generation, among many others.

Researchers have found that the jobs in the most imminent danger are clerical positions in the service sector, like administrative assistants or paralegals. These jobs are not glamorous, but they offer stability and a foothold in the middle class. They are also predominantly held by women, who are often a primary breadwinner for their families. What happens when those jobs go away, and soon?

We are approaching a turning point where productivity may no longer be primarily driven by people working for a living. This will challenge the foundational assumption of modern capitalism, which is that most people will support themselves through paid labor.

If we continue to plan only for reskilling, retraining, or nudging people toward “the jobs of tomorrow,” we will miss the far more fundamental problem: tomorrow’s economy may not have enough jobs to go around.

Instead, we must rethink how created value is shared. Value will still be created, just not by human hands or brains.

Why a Dividend? The Mechanical Case for Direct Cash to People

Why a dividend? Why not use the proceeds of AI-generated wealth to reduce public debt, or expand social programs, or invest in new infrastructure?

There are three main reasons.

First, the simplest: direct cash assistance works. It gives people agency and doesn’t require complex eligibility rules or massive bureaucracies. And as we’ve seen from pilot programs and pandemic-era policies, it helps people meet their needs in ways that are more humane and less prescriptive than traditional aid.

Second, the most common critique of cash assistance—that it discourages work—makes far less sense in a world where work is disappearing. If the economy is no longer structured around widespread employment, then we cannot blame laziness or dependency when people do not work. The moral and economic logic of tying dignity and income to paid labor breaks down when paid labor becomes structurally scarce.

Third—and maybe most important—is the question of speed. AI is moving faster than governments can legislate or programs can adapt. Infrastructure takes decades. Public programs evolve slowly. The effects of AI are immediate. A dividend is the most antifragile tool we have. It lets people adjust to new realities faster than any central planner could hope to. People can read and react to changing conditions in a way that no other program or initiative can.

Ensuring That People Reap AI’s Rewards

There are really important questions to explore about how an AI dividend would work: How will it be distributed? What controls need to be put in place? How do we ensure democratic accountability and equitable distribution?

Those are questions for another day. AI is already changing every assumption we have about value creation, and doing so faster than any of our systems are built to handle. The longer we wait to acknowledge this massive shift, the fewer tools we will have to shape it.

An AI dividend is a complicated proposition, but it would go a long way towards ensuring that, even as machines take on more of the world’s work, people will still reap the rewards.

Jay Chaudhary is a Senior Fellow for Mental Health & Community Wellbeing at the Sagamore Institute, and a Visiting Fellow at Capita.
