We stand at the precipice of a profound transformation in human civilization—the emergence of Artificial General Intelligence (AGI). Among technologists, the question is no longer if, but when. William Gibson's observation that the future is already here, "just not evenly distributed," resonates with increasing urgency, echoed by figures like Elon Musk who foresee revolutionary change. Yet a curious cognitive dissonance pervades the tech community: while acknowledging the potential for deepening inequality, many remain unalarmed by what may become an existential threat to social mobility itself.
Envision a world where traditional forms of capital—financial assets, property, technological access—calcify beyond redemption. Those already endowed with substantial resources will command superior AGI deployments, creating a self-reinforcing cycle of wealth accumulation that dwarfs anything witnessed in human history. Left unchecked, this trajectory threatens to stratify society into an unprecedented state of rigidity, where socioeconomic positions become effectively hereditary, locking those without initial capital into a permanent underclass.
Hannah Arendt might counsel that neither despair nor naive optimism is warranted: the human condition is defined by natality, our capacity to begin anew, to introduce the unexpected into the world. Humanity has always exhibited remarkable adaptability in the face of technological disruption. Nietzsche, in his examination of human essence, identified our defining characteristic as creators—beings driven by a relentless "will to power," continuously forging meaning and value from chaos. As AGI assumes the mechanical tasks of productivity, human creativity and experiential depth may flourish into a new, distinct form of capital.
Some observers suggest a potential divide between "post-AGI compatible humans" and "post-AGI non-compatible humans." This stark separation would transcend traditional socioeconomic stratification, creating classes defined by their very ability to interface with advanced systems. Those unable to adapt might find themselves irreversibly disadvantaged in ways no social policy could adequately address.
While AGI may eventually simulate aspects of human experience, significant philosophical and empirical questions remain about whether consciousness can be fully algorithmically replicated. The gap isn't simply technical but ontological. Phenomenologists argue that our understanding emerges from our embodied existence—our emotions aren't just computational states but lived experiences shaped by our biology, mortality, and evolutionary history. Even if AGI could simulate human-like responses, the difference between simulation and genuine experience may create a persistent market for authentically human perspective.
Skeptics may counter that this presumed gap reflects an outdated dualism, a last-gasp anthropocentrism. They argue that consciousness itself may be fundamentally computational, with our sense of special subjective experience merely an evolutionary adaptation—an effective story we tell ourselves. If so, AGI may eventually replicate not just behaviors but subjective experiences indistinguishable from human consciousness, rendering the proposed value of human perspective ultimately transient.
This isn't to claim the gap is permanently unbridgeable—recent work in embodied cognition and biocomputational interfaces suggests partial convergence—but rather that meaningful distinctions will likely persist long enough to shape economic structures in the AGI transition.
What AGI fundamentally lacks—and what remains uncertain it can ever fully attain—is the embodied consciousness of existence as a biological being. The subjective human experience, interwoven with narrative, emotion, and meaning-making, could become an invaluable currency. Our human perception provides a unique phenomenological perspective that, while potentially approximable by advanced systems, remains grounded in fundamentally different ontological origins. Markets may inevitably evolve around trading and speculating upon these uniquely human insights, intuitions, and predictions.
Critics may note that markets have repeatedly valued efficiency over uniqueness, pointing to how automation has historically created fewer specialized human roles than it eliminated. They may suggest that the economic incentives to approximate human consciousness—even imperfectly—may prove sufficient for most applications, leaving only minimal demand for "authentic" human inputs at premium prices. Economic systems typically optimize for adequacy, not perfection.
That said, humans possess an inherent will that manifests in competition and striving, traditionally structured by markets as arenas of valuation. Should conventional economic frameworks—B2B SaaS markets, stock exchanges—be rendered obsolete by omnipresent AGI-driven decision-making, novel forms of competition would emerge from the human impulse to create hierarchies of value. Human capital may then manifest in our predictive capacity: understanding, anticipating, and speculating upon the choices AGI systems make, much as sophisticated derivatives markets form around conventional futures.
The emergence of new markets amid concentrated AGI ownership requires examining how value creation evolves within constraints. Historical precedent offers insight: despite concentrated capital during industrialization, specialized labor markets emerged where human expertise commanded premium value. Similarly, we might see specialized human-insight markets develop: interpretative communities who translate between AGI systems and human values; predictive markets where humans speculate on AGI decision trajectories; and experiential brokerages where uniquely human perspectives inform AGI parameter adjustments. These markets wouldn't immediately overcome inequality, but they could create economic niches where concentrated capital requires distinctly human input.
Some may question whether such specialized roles could support more than a privileged elite. They may point to previous technological transitions where the creation of new specialized positions failed to match the scale of displacement. The AGI transition might similarly create valuable niches for human input while simultaneously rendering most human labor superfluous, resulting in a society where most humans lack economically valuable roles. The specialized human-insight markets might materialize, but at scales insufficient to sustain broad prosperity.
Yet, consider how literary criticism thrived despite mass production of books, or how art curation gained value amid digital reproduction abundance. The initial monopolization of AGI capabilities might paradoxically increase the value of certain human perspectives precisely because they become scarce inputs for AGI optimization.
We can envision specific manifestations of this human capital: predictive analysts who anticipate AGI-driven market movements; experiential consultants who help AGI systems understand nuanced human reactions. These roles wouldn't require competing with AGI capabilities but rather complementing them through uniquely human faculties—creating bridges between algorithmic efficiency and human meaning-making.
Even amid algorithmic supremacy, humanity would not surrender to passivity. The burden of existential responsibility remains uniquely human. The distinctly human trait of strategic foresight—our capacity for self-conscious reflection—will itself accrue immense value. AGI-driven decisions, though efficient and rational by standards we cannot yet foresee, will nevertheless operate within parameters influenced by human preferences, biases, and objectives. Those capable of intuitively navigating and predicting the nuanced choices of AI systems will become custodians of significant new capital.
Thus, AGI, rather than marking the terminus of human opportunity, will inaugurate a transformative epoch where humanity redefines capital in its own image. Simone de Beauvoir's ethics of ambiguity suggests that humanity finds purpose precisely in the tension between facticity and transcendence—between our limitations and our possibilities. Even amidst profound inequality, we will not lose our capacity for reinvention; we will instead channel it into new arenas, establishing fresh frontiers of value where human consciousness itself becomes the ultimate currency.
We must imagine Sisyphus happy. The rock of labor may change form, but the essential human struggle to create meaning persists. In the age of AGI, our most profound challenge—and opportunity—lies not in competing with artificial intelligence, but in deepening our understanding of what it means to be irreducibly human.