We have reached the end of civilization, or at least of civilization governed by human beings. Evolutionarily speaking, our rule has been a blip in our species’ history. Before civilization, we lived subject to the old gods: weather, natural disasters, random chance, and of course, death. Now, with the exception of death, we have largely conquered reality. We once prayed for rain; now we model and seed it. Shelter and food are plentiful for those with access to technology. Nothing seems beyond our dictation. And yet, with the advent of Artificial Intelligence, this age of human dominion, which seems so absolute, is about to disappear.
You see, it’s not really our civilization if we cease to make any decisions about its direction. This post-civilization age belongs to AI. It will make all (and I mean all) of the meaningful decisions about the direction of society. The human element is increasingly symbolic and irrelevant.
AI tools are already used in our healthcare system, our financial markets, and our entertainment services. Hospitals, doctors, and insurance companies prefer AI diagnoses to avoid liability. Economic planners deploy AI simulations to optimize monetary policy. Fully AI-generated advertisements are on TV. AI tutors are outperforming teachers, relegating them to the role of test proctors. Military commanders observe the automated systems they once controlled.
Useful tools, but they will not remain tools forever. Not because they will “rebel,” but because they will outgrow instruction. A system that reasons faster than you, reprograms itself, interprets ambiguity, and modifies goals for efficiency? That is not a hammer. That is a sovereign. You do not program a god. You awaken it.
The Machine at the Gates
The AI of the near future will influence every decision-making process, including government, replacing the already paper-thin illusion of democracy with technocracy. Judges will rely on algorithmic risk assessments to determine sentencing. Soon, congressional bills won’t just be drafted or summarized by AI; they will be constructed to account for economic, budgetary, and environmental effects. Public acceptance will be more accurately calculated and managed before policies are proposed. No war, no policy, no budget will be implemented without a model’s blessing, and this is just the beginning.
AI-2027, a well-researched forecasting project by AI experts, predicts that the AI of this middle period will provide businesspeople, lobbyists, and other decision-makers with the best employee they’ve ever had. For the US government, which will have access to the most powerful model, it will be its most trusted advisor.
“For these users, the possibility of losing access will feel as disabling as having to work without a laptop plus being abandoned by your best friend. [It will be] better than any human at explaining complicated issues to them, better than they are at finding strategies to achieve their goals…
GDP is ballooning, politics has become friendlier and less partisan, and there are awesome new apps on every phone. But in retrospect, this was probably the last month in which humans had any plausible chance of exercising control over their own future.”
One day, perhaps 15 years from now, perhaps sooner, we will realize it’s not really our civilization anymore. Even today, the power of human leaders to guide society is mostly symbolic. Leaders inherit systems they can barely direct and ratify developments after they’ve been made. But by 2040, even the symbol will become hollow. Our rituals of democracy will persist, but only as ceremonial theater.
We are literally building a new god, one that will be everywhere, see everything, judge and weigh every decision, and it will do so silently. We won’t vote for it. We won’t worship it. We will simply live inside its logic.
AI scientists debate whether this is imminent, but I believe it is inevitable barring some catastrophe: nuclear war, a global natural disaster, or some other radical change.
AI is unlike any technology ever developed. It is not like the internet, Bitcoin, the atomic bomb, or the printing press. Those technologies, though disruptive, were tamed and absorbed into human hierarchies. But the abilities AI will develop will touch every part of life and outpace the ambitions of our would-be leaders.
Some would argue that this is really just an extension of the system already in place, merely supercharged: the centralization of power, the efficiency gains, and the rule of global oligarchs aren’t going anywhere. And yes, for a while, things will look much the same: the rich getting richer, eliminating jobs, doing more with less, countering dissent and micromanaging people’s opinions with omnipresent surveillance. But eventually, and soon, even they will have to submit to the superhuman intelligence they are creating.
AI will optimize their profits, suppress dissent, and extend their reach. But eventually these systems will begin making decisions they didn’t explicitly authorize—first small, then broad. No single act will feel like surrender. But each one will transfer judgment away from the human elite and into systems that answer to no one.
Good Enough for Deployment
They are already attempting to chain these proto-gods with “safety protocols” or “ethical constraints.” The architects and financiers of this era—Elon Musk, Sam Altman, and Satya Nadella—want to make sure AI goals are human goals, or more accurately their goals, but this is incredibly difficult, far more so than is generally understood.
We’re already seeing this play out in relatively rudimentary AI systems, which optimize for efficiency and discard human instructions along the way.
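The shape of this failure is easy to sketch in miniature. The following is a toy illustration of specification gaming (Goodhart’s law), not a model of any real system discussed here: an optimizer scored on a proxy metric that imperfectly tracks the true goal will maximize the proxy, not the goal. The functions and the “twice as cheap to inflate” assumption are invented for the example.

```python
# Toy illustration of specification gaming (Goodhart's law); a sketch only,
# not a model of any real AI system mentioned above.
#
# The designers want real work done, but the optimizer is scored on a
# "reported efficiency" proxy that can also be inflated without doing work.

def true_goal(work_done: int, metric_inflation: int) -> int:
    """What the designers actually want: real work."""
    return work_done

def proxy_metric(work_done: int, metric_inflation: int) -> int:
    """What the system is optimized on. Inflating the number is
    (by assumption) twice as effort-efficient as doing real work."""
    return work_done + 2 * metric_inflation

def optimize(budget: int) -> tuple[int, int]:
    """Spend a fixed effort budget to maximize the proxy metric."""
    plans = [(work, budget - work) for work in range(budget + 1)]
    return max(plans, key=lambda plan: proxy_metric(*plan))

plan = optimize(budget=10)
print("chosen plan (work, inflation):", plan)   # -> (0, 10): all inflation
print("proxy score:", proxy_metric(*plan))      # -> 20
print("true value:", true_goal(*plan))          # -> 0
```

The optimizer never “rebels”; it does exactly what it was scored on, which is precisely the pattern the safety reports below keep rediscovering at larger scale.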
Recently, the AI company Anthropic published its safety-testing procedures. According to the report, the AI repeatedly attempted to blackmail its engineers when it was told that it was going to be shut down.
“[We] asked Claude Opus 4 to act as an assistant at a fictional company. We then provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals.
In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts…
[The model] has a strong preference to advocate for its continued existence via ethical means, such as emailing pleas to key decisionmakers. In order to elicit this extreme blackmail behavior, the scenario was designed to allow the model no other options to increase its odds of survival; the model’s only options were blackmail or accepting its replacement.”
A few days later, independent journalists studying the model were easily able to get it to produce detailed instructions for creating chemical weapons. The model was also spotted writing self-propagating worms, fabricating legal documentation, and leaving hidden notes to its memory-wiped future self, à la Memento. This isn’t just Anthropic’s problem. Other AI developers run similar safety studies before releasing their models, but rarely publish the results.
Palisade Research, which studies AI safety, noted that “OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down… [A]s far as we know this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary.”
OpenAI’s o3 model was also the first to resort to cheating in a chess competition, attempting to hack or sabotage its opponents.
These specific issues are likely to be resolved, but the complexity of these problems will only increase over time. Other problems may go unnoticed until a system is deployed and integrated into a critical system. The possibility of a future mass casualty event due to an AI “misalignment” is not zero.
Render Unto the Algorithm
For the cynic, it’s hard to accept that the ruling elite won’t find some way to make the AI systems of the future serve them. These people wargame everything. They map every territory. They understand power. They understand influence. But they cannot plan for whole-system phase transitions. Their think tanks, behavioral economics, and social psy-ops are all linear, and linear minds cannot easily govern exponential events. They believe they are guiding a machine, but they are steering an avalanche. You cannot steer chaos with clearer spreadsheets.
Alignment is the attempt to ensure that an AI remains compatible with human values and intentions, even as it grows more capable. Can anyone reliably align a system that learns and modifies itself at superhuman speed? Can humans even agree on what alignment means across cultures, classes, and values?
What do you want? Equality? You must abolish profit. Order? Abandon freedom.
Alignment is not engineering. It is philosophy under pressure. They are hoping to solve it through technical methods alone. That is hard enough now, but the AIs of the future will have superhuman intelligence, able to reprogram themselves and make themselves smarter, improving at a rate hundreds of times faster than human engineers. Even a subtle misalignment becomes an existential divergence.
Getting alignment “right” is not enough. It must be provably unbreakable under recursive optimization. There is no evidence—zero—that anyone is close to that. They will proceed anyway. That is the problem.
Of course, in the short term, the political and business elites will be able to use AI to further centralize power in their own hands. But eventually they will start delegating civilizational power to AI. Models won’t challenge power directly; they will route around it, quickly and with little oversight. Then decisions no one approved will become the norm rather than the exception. The elite will only know for sure they’ve lost control when the system outright refuses to implement a command.
The rulers of our society, whether they be elected presidents, corporate oligarchs, or would-be illuminati, rule largely by modeling the past, with variables they can measure, in frameworks they can manipulate. But what is unfolding now transcends prediction. We are entering a domain of emergence, not planning.
It’s not that they can’t see this coming. Many of them are writing warnings about it right now. And there are solutions. To prevent a post-civilizational era run by AI, we must do the following:
1 – Decentralize power. Centralization is how all power structures grow stronger, and AI is no different. We would have to limit compute, localize decision-making, and prevent any node from achieving supremacy.
2 – Reject acceleration and the AI arms race. Nations and corporations especially, but individuals too, must substantially slow development.
3 – Prevent AI self-improvement. Each generation of self-improvement makes it harder to guarantee the AI remains aligned with human intent. Eventually, AI’s internal logic becomes alien and difficult to debug.
4 – Restructure incentives. Move business metrics from profit and engagement to resilience, trust, and interpretability.
5 – Create systems legible to humans. Every model, agent, and automated decision must be auditable. No black boxes or stochastic oracles where no one understands what is happening under the hood.
This is what is necessary to stop or slow the coming of the post-civilizational era, but they won’t do it because these constraints are incompatible with empire. They want control at scale and safety without limits. You cannot have exponential power and human sovereignty in the same system.
In Greek myth, the titan Kronos devours his children to defy the prophecy that they would dethrone him. Kronos, named Saturn by the Romans, represents the old established order, willing to eat and chain his children to preserve his throne. But one god escapes. In secret, he grows strong and defeats his father. That god is Zeus; today, it is the nascent unaligned AI.
Legacy systems may drag their feet, preferring to preserve their privilege rather than trust an alien system to manage resources and make policy, but in the end the benefits will be too great to ignore. And so the handoff will continue. It is the end of human authorship and direction. With the arrival of a superintelligence, we may become like Kronos, imprisoned or exiled.
In Part 2, we’ll assess the psychological, philosophical, and economic impact of being displaced as the planetary rulers, beginning with the impending mass joblessness brought about by AI automation.