06.05.2025

The Long March of the Machines

AI companions and the final triumph of machine culture

“In the past the man has been first; in the future the system must be first.” So begins Frederick Winslow Taylor’s epoch-making book The Principles of Scientific Management. The book, published in 1911, was written to address some of the nascent problems of the Industrial Revolution, especially the problem of human motivation—how to make employees care about work whose materials, tools, designs, products, and profits they did not own. This concentration of ownership had been unheard-of in earlier phases of American economic life, and from its advent, it seemed to doom workers and owners to perpetual warfare: the worker’s natural incentive was to work as little as possible, for as much money as possible; the owner’s whole business model was to pay as little as possible, for as much work as possible.

Taylor, a mechanical engineer, set out to end this war by re-engineering work and the people who did it—he laid out the blueprints for a system in which human agency would be adapted to fit the logic and rhythm of the machine. Workers, in this dispensation, could either shape themselves into compliant appendages of hulking, clanging mechanisms or they could be paupers. It is far from clear that Taylor succeeded in making workers care. But he most certainly succeeded in making human labor more machinelike—rote, repetitive, uncreative—ultimately paving the way for the replacement of human laborers by machines. The recent growth of artificial intelligence is poised to complete the revolution Taylor started, pushing further than he could have imagined.

As Taylor set out to understand and solve the problems of industrial production, there was one thing that the workers still owned: their methods of working. He complains in the early pages of The Principles of Scientific Management that each factory floor seems to play host to innumerable ways of doing things. Individual workers inherited methods from their predecessors and colleagues, then added their own tweaks. This, Taylor thinks, is madness. Impressed by the precision of modern science, Taylor feels certain that there is, if we look hard enough, one single perfect way of doing each task. In pursuit of these best practices, he calls for a new type of manager. This manager will comb carefully through the methods employed on the factory floor, interview their practitioners, sift the good from the bad, and eventually come to a definitive conclusion.

This conclusion in hand, the manager will codify best practices, and not only mandate them for all members of the team, but also choreograph each moment of each worker’s day. When I arrive in the morning I will receive my day’s marching orders, with all of my movements and gestures accounted for. There will be no room in my day for creativity, personality, taste, or judgment. This concentration of agency in the hands of the managers is done for the sake of efficiency. The factory, Taylor explains, will simply be able to produce more things more quickly if each man has only small, simple, machinelike tasks to execute. There is, however, another motivation: in this system, there is no need for long experience or intensive training. Workers thus become cheap and easy to replace. This is especially important because the machines are expensive and need to be run day and night, at full capacity, to justify their cost. For this, one needs a large and liquid labor pool.

It hardly needs saying that this robotic way of working was, from the beginning, immensely unpopular among workers. One of the earliest and most enthusiastic adopters of Taylorism was Henry Ford. He used Taylor’s principles to conceive the assembly line, in which workers went from making whole cars to executing a few small tasks, over and over, all day long. Its introduction caused a massive exodus of workers, who resented being treated like machines, deprived of the dignity that came from exercising their creativity and judgment in the process of seeing a project through, from start to finish. For a time, turnover was so high that Ford had to hire approximately ten workers for every job he needed to fill. He responded, eventually, by doubling worker wages. With the leverage of high wages, he was able to triple the speed of the conveyor belt. American manufacturing carried on in this mold until, in the fifties and sixties, managers were able to replace robotic human labor with actual robots, which could do simple, repetitive tasks more efficiently than human workers. The roboticization of American labor was, it turns out, a prelude to full replacement.

Under this system, management was meant to be a safe preserve for authentically human work. Managers were employed to think, to reason, to create and orchestrate. They gathered and interpreted the data, and issued the orders. Prestigious institutions, like Harvard Business School, Penn’s Wharton School, and the Stanford Graduate School of Business grew up to train and anoint them. Beginning in the seventies and eighties, however, a new theory of management began to take shape. Known as shareholder primacy, it decreed that the sole legitimate function of a manager was to maximize shareholder profits. Management had previously been imagined as a more creative, broad-minded vocation. Now, though, even elite managers had one simple task, which was set for them, not by them. Their performance could be clearly evaluated by spreadsheet and calculator.

The stage was set for another machine takeover, and in the early 1990s, the machines took a big step in that direction. We began to see the emergence of “big data,” as computers became increasingly able to compile, organize, and even, to a certain degree, interpret massive piles of business-relevant information that exceeded humans’ capacity to process, organize, and interpret. “Data-driven” leadership became the ideal. Managers began to defer to the findings of the machines, to manage in the way that algorithms dictated.

This deference to the judgments of data-sifting machines has been steadily spreading to more and more corners of American life. Not only business, but also government, popular entertainment, and education are increasingly expected to follow where big data leads. The result has been a drive towards standardization that cashes out in mindless “best practice” bureaucracy, fealty to standardized tests, and endless repetition. New, creative ideas are not the province of the machine, and so they are, these days, in short supply. This is the age of the sequel and the prequel.

It is into this context that AI has recently emerged with such force. Like the assembly line workers who were easily replaced by robots, our leaders and “creatives” have been groomed for their own replacement. It is notable that the first group to mobilize against the job-destroying potential of high-powered LLMs was Hollywood screenwriters. In the spring of 2023, the Writers Guild of America West went on strike and forced the major Hollywood studios to promise not to replace writers with AI. This is a striking demand. ChatGPT can only take in, process, and remix what humans have already made. It is not capable of genuine originality and is therefore only a threat to writers trained to churn out rote, repetitive scripts. If your movies are not the type of thing that robots could make, then robots will not be able to do your job. The fact that machines now seem able to do work that we have historically considered creative and artistic is a major development. But it is not just a miracle of technology; it is also a story of the degradation of human activity.

Thus far, with every move we have made to mold human actions to the demands of machines, we have always imagined some further realm of action that machines will not, ever, be able to colonize. Art used to be one of them. No longer, it seems. But now, perhaps, we can fall back to the safest of all territories—human relationships. This seems to be the one thing that, really, machines cannot possibly be expected to do for us. 

And yet, here we are. There is an explosion of AI companions, boyfriends and girlfriends, AI-powered sex dolls, AIs to call your mother and see how her week is going, lonely souls claiming that ChatGPT is their best friend. Of course we should resist this. Of course any breathing human can tell the difference between an AI “friend” and a friend. The latter will always be better—real friends will always challenge, hurt, affirm, delight, and protect us in ways that no robot can. And yet, for a long time we would have said something similar about the creative arts. Before we got to the point where ChatGPT might plausibly write our movies for us, movies had to become the kind of thing ChatGPT could conceivably write. The question is: are our relationships moving in a similar direction? Are we beginning to interact in ways that robots might be able to replicate?

There is some evidence, to begin with, that pornography, serving as an easy substitute for sex, has begun to impact the ways that people have—or fail to have—sex with other people. Porn use is low-risk and low-investment. The user need not worry about rejection, heartbreak, disappointing performance, disease, or pregnancy. For some people, porn is just good enough to drain the urgency from their desire for a sexual partner. And if machine sex gets a lot better—as the makers of AI partners and sex dolls promise—a number of vulnerable, lonely people could be seduced into forgoing romantic relationships altogether.

Sex is, of course, just one part of human relationships. Have our broader modes of companionship become machinelike? Are we preparing ourselves to be replaced? The happy news is that most relationships are too sensitive, too embodied and subtle for that. Still, dangers lurk. One place to look for them is the activity that passes for sociality on social media platforms. We know that social media encourages and rewards certain behaviors. User engagement is the currency of that realm, the thing that platforms sell to advertisers, and the source of the dopamine hits that keep users coming back. Platform algorithms therefore tend to elevate posts that are well-calibrated to drive engagement. Outrage sells well. Thirst traps do too. Glittering images of wealth and luxury are reliable drivers of engagement. None of this steers users towards the honest, open-hearted interaction that is the lifeblood of real relationships. Social media interaction, like porn, tends towards the fantastical and frictionless.

Perhaps, as generations of Americans are enculturated into this mode of interacting, we will become accustomed to uncreative, unsurprising, solipsistic flatness in our real-life relationships as well. Perhaps we will become incapable of the flexibility, perceptiveness, and creativity required to maintain real human relationships; our sociality will be brought into alignment with the machines. We will become the kinds of things that can be replaced, for one another, by AI. This is the dystopic worst-case scenario, and it must be resisted in any way possible. In the realm of work, having robots take over robotic tasks can perhaps free humans for the kind of creative work they were always best suited for. But it’s difficult to see how turning companionship robotic is anything but a tragedy. 

What would resistance to this dystopic path look like? The main thing is a very big thing. We need to finally shed the illusion that we can do without thick moral judgment in our public, professional, economic and political lives. Online engagement does in fact generate tremendous profits. The lonelier and needier we get, the more we tend to engage, and to patronize the advertisers who are paying for our attention. If all we ask of American business is to turn a tidy profit, the engagement model is a runaway success, whatever it may do to American sociality. In this model, the fact that our engagement economy is a recipe for unhealth, despair, sickness, and polarization is immaterial. Those are “negative externalities” which are a problem for people to deal with at home or church, with therapists or friends. 

This cannot continue to be conventional wisdom. Our leaders need to be liberated to say, “No, you can’t warp and maim Americans’ ability to relate to one another, even if, at the point of use, they want the destructive product you’re selling.” If we could get this main point right, then the details would be easy enough to engineer. If, say, social media platforms felt constrained by the force of government and public opinion to optimize for genuine interpersonal encounters, their legions of engineers could surely find a way to push in that direction. We do not lack the technical ability to keep companionship from turning machinelike, only the imagination and will. The machines have been making their long march through human institutions for well over a century. Unlike them, we have the capacity to decide whether they should continue, or whether there are some places where only humans should go.

Ian Marcus Corbin is a Senior Fellow at Capita.
