Three authors argue predictive AI serves corporate power, not society
Summary
Algorithmic prediction, driven by profit and mathematical rationality, increasingly displaces human forecasting. Three new books warn that this shift concentrates power and control, shaping our lives in potentially harmful ways.
Three authors challenge the predictive AI industry
Three new books scheduled for release in 2025 and 2026 argue that the tech industry uses predictive AI primarily to exert corporate power and maximize profits. These authors claim that the current reliance on algorithmic forecasting creates a more restrictive world by replacing human judgment with statistical models. They suggest that society must reclaim control over data and decision-making to avoid an automated future.
Oxford economist Maximilian Kasy, UC Berkeley professor Benjamin Recht, and Oxford philosopher Carissa Véliz each provide a different critique of modern forecasting. Their work examines how supervised learning, mathematical rationality, and self-fulfilling prophecies shape modern life. These books collectively argue that technology is not an inevitable force but a set of choices made by specific actors.
The authors agree that the invisible layer of prediction currently grafted onto daily life serves the interests of Silicon Valley oligarchs. They suggest that algorithms do not just guess what will happen next; they actively push the world toward specific outcomes. This shift moves society away from intuition and toward a rigid, data-driven existence.
AI serves corporate profit margins
In his 2025 book, The Means of Prediction, Maximilian Kasy explains how supervised learning uses large, labeled data sets to guess future outcomes. Companies use these statistical patterns to determine who receives a mortgage, who gets a job, and who stays in prison. Kasy argues that these systems prioritize corporate bottom lines over individual well-being or social equity.
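The mechanism Kasy describes can be sketched in a few lines: a model fit on labeled historical records is used to score new cases. A minimal illustration using a 1-nearest-neighbor classifier (the loan data, features, and labels below are invented for the example, not drawn from the book):

```python
# Minimal supervised learning sketch: a 1-nearest-neighbor classifier.
# Labeled historical records stand in for the large data sets Kasy
# describes; all values are invented for illustration.

def nearest_neighbor(train, query):
    """Predict the label of `query` from the closest labeled record."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda rec: dist(rec[0], query))
    return label

# (income_in_thousands, debt_ratio) -> loan repaid? (1 = yes, 0 = no)
history = [
    ((30, 0.9), 0),
    ((55, 0.4), 1),
    ((80, 0.2), 1),
    ((25, 0.8), 0),
]

applicant = (60, 0.3)
print(nearest_neighbor(history, applicant))  # prints 1: scored by past patterns
```

The point of the sketch is Kasy's: the applicant is judged entirely by resemblance to past records, so whatever patterns (or biases) the history contains are reproduced in the decision.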
Kasy rejects the idea that AI harms are "unintended consequences" or simple alignment errors. He writes that algorithms promoting outrage on social media work exactly as intended because outrage increases ad clicks. Similarly, screening out job candidates with health problems maximizes profit by reducing potential company costs.
The economist asserts that making algorithms "fairer" will not solve the underlying problem. Because predictive models rely on past data, they naturally replicate racist and sexist patterns found in history. Kasy argues that profit incentives will always outweigh attempts to make these systems ethical or unbiased.
To counter this, Kasy proposes that society must establish democratic control over the fundamental resources of the AI industry. He identifies four key areas that require public oversight:
- Data: The raw information used to train models.
- Computational infrastructure: The physical servers and hardware.
- Technical expertise: The human knowledge required to build systems.
- Energy: The massive amounts of electricity needed to run data centers.
Kasy suggests creating data trusts where collective public bodies decide how information is processed and used. He also advocates for corporate taxation schemes that force companies to pay for the social harm their AI systems inflict. While he acknowledges that public trust in institutions is currently low, he argues that these structural changes are necessary to prevent total corporate control.
Mathematical rationality replaced human intuition
In The Irrational Decision, Benjamin Recht traces the current obsession with automated choice back to World War II military strategy. Scientists and statisticians used mathematical models to fight the Axis powers, leading them to believe that computers could serve as "ideal rational agents." This ideology, which Recht calls mathematical rationality, treats every human decision as a statistical optimization problem.
Recht explains that this belief system ignores the value of experience, judgment, and morality. Modern algorithms now manage supply chains, schedule flights, and place social media advertisements based on these 1940s-era theories. This approach treats life like a round at a casino where every action must maximize utility and minimize risk.
Figures like forecaster Nate Silver and psychologist Steven Pinker champion this analytic mindset today. They argue that humans should learn to think and make decisions more like computers. Recht calls this idea ridiculous and points out that humanity achieved its greatest breakthroughs without formal decision theory.
Humans successfully developed major societal and scientific innovations long before the invention of predictive algorithms. Recht highlights several specific examples of progress achieved through human judgment rather than mathematical optimization:
- Life expectancy: Rose from under 40 in 1850 to 70 by 1950.
- Physics: Thermodynamics matured in the 1800s; quantum mechanics and relativity followed in the early 1900s.
- Transportation: Engineers built cars and airplanes using intuition and physical testing.
- Governance: Societies created modern democracy without the help of automated decision models.
Recht argues that mathematical rationality cannot solve the world's most complex problems. He suggests that unquantifiable human traits like morality are better suited for addressing social issues. By reducing life to costs and benefits, society loses the ability to handle situations that do not fit into a spreadsheet.
Predictions act as self-fulfilling prophecies
Carissa Véliz argues in her 2026 book, Prophecy, that a prediction is often just a wish with the power to bend reality. She compares predictions to magnets that pull the future toward a specific, pre-determined outcome. When people believe a forecast and act on it, the prediction itself becomes the cause of its own success.
Véliz uses Moore’s Law to illustrate this phenomenon. In 1965, Gordon Moore, who later cofounded Intel, predicted that transistor density would double every year, a pace he revised to every two years in 1975. This did not happen because of a natural law of physics; it happened because the entire semiconductor industry spent billions of dollars to make it come true.
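The force of the prediction comes from simple compounding: at one doubling every two years, density multiplies by 2^(years/2). An illustrative calculation:

```python
# Doubling every `doubling_period` years compounds as 2 ** (years / period).
def density_factor(years, doubling_period=2):
    """Growth factor in transistor density after `years` years."""
    return 2 ** (years / doubling_period)

print(density_factor(10))  # 32.0 -- five doublings in a decade
print(density_factor(40))  # 1048576.0 -- about a million-fold in forty years
```

A million-fold gain in forty years is the kind of target no physical law guarantees; it took sustained, coordinated investment, which is exactly Véliz's point about prophecies organizing behavior.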
The tech industry uses these "prophecies" to distract the public from current problems. When executives promise that artificial general intelligence will solve all human struggles, they shift attention away from the labor exploitation and environmental damage AI causes today. These predictions function as orders that tell people how to behave and what to expect.
Véliz links the heavy use of prediction to authoritarianism and social oppression. She writes that when a society relies on algorithms to tell it what will happen, it gives up its agency. Believing a corporate prediction is often the same as obeying a command from that corporation.
The author concludes that technology is not destiny and that humans still have the power to choose their own path. She suggests that the most effective way to resist the predictive layer of modern life is to simply defy the algorithms. By making unpredictable choices, individuals can reclaim the future from the companies trying to script it.
These three books suggest that the fight for the future is actually a fight for power over the present. Kasy, Recht, and Véliz all urge readers to stop treating AI forecasts as objective truths. They argue that the goal of technology should be to serve human needs rather than to optimize human behavior for profit.