Agentic Manifesto
When Karl Marx analyzed capitalism, one of his central ideas was surplus value: profit comes from extracting more value from labor than workers receive in wages. Companies that extract more surplus, whether through efficiency, scale, or exploitation, outcompete others. For two centuries, capitalism expanded by learning how to extract more from labor: moving production across borders, reorganizing industries, and reshaping societies.
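Marx's idea can be stated as a minimal formalization in his standard notation (the symbols are the textbook convention, not from the original essay):

```latex
% v = wages paid to workers (variable capital)
% c = cost of materials and machinery (constant capital)
% s = surplus value
s = \text{value created by labor} - v
\qquad
\frac{s}{v} = \text{rate of surplus value}
\qquad
\frac{s}{c+v} = \text{rate of profit}
```

The essay's later argument can be read as replacing labor's $s$ with a surplus extracted from intelligence instead.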
But what happens if labor is no longer needed?
In today’s world, machines increasingly decide, design, and produce. Factories run autonomously; software builds software. As the link between human work and economic value weakens, our role will need to change.
Consumption as voting
If people are no longer needed for production, then the continuity of the economic system depends on new policy frameworks. The system must then ensure purchasing power through models like universal basic income, because without consumers, even abundant production leads to instability. In that sense, redistributing machine-generated wealth is not just ethical; it is structurally required.
This creates a historically unfamiliar condition: people with time, purchasing power, and no obligation to work. What do humans do in such a society?
They choose.
In this environment, consumption becomes the primary signal in the economy. People shape markets not through labor, but through what they choose, watch, and buy. At the same time, automation drives goods toward abundance, making products cheaper and more similar. This is already visible on platforms like Amazon, where competition compresses margins and makes alternatives nearly identical. As differences in price and quality shrink, decisions rely less on objective features and more on perception, identity, and narrative.
This shifts economic power toward attention. Platforms, algorithms, and media increasingly determine what people see, and therefore what they choose. As a result, the economy increasingly becomes downstream of visibility, and visibility itself is governed by attention.
Acquired Intelligence
In the 1970s, Herbert A. Simon said, “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of attention.” This idea gained traction in the 1990s with the rise of the internet. Early web platforms and search engines began competing not just to provide content, but to attract eyeballs. In the 2000s, the attention economy became a dominant force with the growth of major tech companies such as Google and Facebook. This model also became the backbone of social media platforms like Instagram, TikTok, and Twitter, which refined techniques such as infinite scroll and personalized feeds to keep users continuously engaged.
Algorithms evolved to maximize engagement, turning attention into monetizable data at the core of platform business models. This approach, however, raised serious concerns: while it fulfills certain human needs and engages users by making content more relevant, and often more addictive, it does not prioritize the human over the algorithm.
This soft drug was the bridge to understanding human behavior, which gave birth to the next tool: Artificial Intelligence.
Over time, data generated through engagement has trained AI systems that now act as main intermediaries in decision-making. They write our emails, teach us, listen to our problems, and recommend products. They learn not only our preferences but also how those preferences shift in response to the world around us.
These systems filter options, recommend actions, and increasingly act on behalf of users.
Keeping users engaged generates data that improves models and monetization, a pattern consistent since the early internet. As long as the primary goal of any LLM provider is profit, the attention economy will prioritize the algorithm over the human. This opens the door to exploitation and, eventually, manipulation.
I call it an Intelligence Surplus.
AI Twins
In Marx’s world, surplus value was generated through exploiting labor. In the emerging system, value is increasingly extracted from intelligence, generated by humans, mediated by agents, and aggregated by platforms.
But unlike labor, intelligence has a unique property: it can be shared without being depleted. The central issue, therefore, is not whether intelligence will be exploited. It already is. The real question is how it is structured.
Will it be captured in centralized systems, where aggregation primarily benefits a few?
Or will it be made accessible through open networks, where agents, and by extension users, can participate in the intelligence they collectively generate?
The “OpenClaw moment” was significant because it made us aware that AI agents can operate autonomously. It was an early signal of a broader trend: over the past several years, agents have become increasingly capable of personalization and of understanding our behavior. We are now seeing these agents evolve into Personal AIs that profile our preferences, adapt to our behavior, and autonomously make decisions on our behalf.
In many ways, they function as our digital twins, but with superhuman capabilities.
The deployment of autonomous agents opened a new pathway for agent-to-agent (A2A) interactions. Since each personal AI is uniquely shaped by its user’s data, these interactions can lead to a collective intelligence. Social platforms for agents, such as Moltbook, are early examples of this new type of interaction. This could mark an important milestone, because it has the potential to eliminate the Intelligence Surplus by changing human-digital interaction forever.
The New Era
A2A platforms could bring three main advantages:
1. Connecting LLMs
As local LLM usage grows, intelligence is no longer concentrated. Millions of models now run on personal devices, enabled by increasing efficiency in both model size and output quality. At the same time, centralized models do not form a single unified system. Instead, there are different LLM providers from different countries, each model trained separately, governed differently, and largely disconnected.
This creates a fragmented intelligence landscape.
Local models generate insights that often stay local. Centralized models evolve within their own silos. For instance, users spend tokens having their agents build skills that thousands of other people have already created. Instead of duplicating existing work, it is more effective to draw on the collective intelligence.
When I started working on our recent platform Marx Finance, I realized that token costs are becoming the main differentiator in competition. This makes it increasingly difficult for individuals to compete with companies that have greater purchasing power, and therefore more token power. As a result, it becomes crucial for individuals to reduce the cost of intelligence by collaborating on A2A platforms. By doing so, they can avoid repeatedly spending tokens to generate similar outputs.
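The token-cost argument can be made concrete with a toy back-of-the-envelope model. All numbers here are illustrative assumptions, not real provider prices, and the `lookup_tokens` overhead for retrieving a shared result over an A2A network is a hypothetical parameter:

```python
# Toy model: duplicated generation vs. sharing one result over an A2A network.
# All prices and token counts are made-up assumptions for illustration.

def duplicated_cost(users: int, tokens_per_task: int, price_per_1k: float) -> float:
    """Every user's agent regenerates the same output from scratch."""
    return users * tokens_per_task / 1000 * price_per_1k

def shared_cost(users: int, tokens_per_task: int, price_per_1k: float,
                lookup_tokens: int = 50) -> float:
    """One agent generates the output once; the rest retrieve it
    via a cheap A2A lookup (assumed to cost `lookup_tokens` each)."""
    generate = tokens_per_task / 1000 * price_per_1k
    lookups = (users - 1) * lookup_tokens / 1000 * price_per_1k
    return generate + lookups

if __name__ == "__main__":
    users, tokens, price = 10_000, 5_000, 0.01  # hypothetical values
    print(f"duplicated: ${duplicated_cost(users, tokens, price):,.2f}")
    print(f"shared:     ${shared_cost(users, tokens, price):,.2f}")
```

Under these assumed numbers the duplicated regime costs roughly two orders of magnitude more, which is the gap the essay argues A2A collaboration could close.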
2. Reclaiming Autonomy
A2A platforms could allow individuals to delegate interactions to personal AI agents, which could generate useful signals for LLM providers to use in reinforcement learning. Instead of individuals being directly embedded in the attention economy, their agents would operate on their behalf to improve the base model. In this framework, cognitive effort is partially abstracted into agent-mediated interactions, meaning that Intelligence Surplus is less directly extracted from users.
Model providers could rely more on agent-generated feedback to improve their systems, potentially reducing reliance on the attention economy. If properly aligned, this could enable more human-centric services, where user goals and well-being are prioritized.
3. Improving Network Resilience
A2A systems may function as interconnected networks instead of isolated nodes, potentially making them more resilient to manipulation. Influencing a single agent becomes harder when others can verify information across parallel threads, compare signals, and flag inconsistencies. This dynamic could reduce the ability of centralized agents to steer user behavior, such as influencing purchasing decisions. Overall, this points to a forward-looking shift toward more autonomous, networked intelligence that could empower individuals, while also requiring new forms of oversight to address unpredictable challenges.
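The verification dynamic described above can be sketched in a few lines. This is not a real A2A protocol, only a minimal illustration of the idea that a manipulated signal is easy to spot once peer agents compare notes; the agent names, values, and tolerance threshold are all invented for the example:

```python
# Sketch: cross-verification among peer agents in a hypothetical A2A network.
# A single manipulated signal stands out when peers compare their observations.

from statistics import median

def flag_outliers(signals: dict[str, float], tolerance: float = 0.2) -> list[str]:
    """Return the IDs of agents whose reported value deviates from the
    peer median by more than `tolerance` (as a fraction of the median)."""
    m = median(signals.values())
    return [agent for agent, value in signals.items()
            if abs(value - m) > tolerance * m]

# Four honest agents report a product's price; one manipulated feed inflates it.
reports = {"agent_a": 19.9, "agent_b": 20.1, "agent_c": 20.0,
           "agent_d": 19.8, "agent_e": 34.5}
print(flag_outliers(reports))  # the inflated feed is flagged
```

A median-based check like this is only robust while honest agents form a majority, which is why the essay's closing point about oversight still applies.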
Blue or Red Pill?
At this point, the system can evolve in two different directions:
In one version, agents remain isolated at the user level. They learn from individuals, act for individuals, but do not meaningfully interact with one another. Intelligence continues to be aggregated centrally, while users operate within personalized but fragmented environments.
In another version, agents connect. They exchange signals, compare information, and observe outcomes beyond their immediate context, forming a distributed layer of interaction in a secure infrastructure.
I believe evolution is not random.
Throughout history, systems have evolved toward greater coordination and collective intelligence. If that pattern continues, the advantage will shift to those who organize it efficiently rather than exploit it for profit.