One of the most visible effects of the web was the disintermediation of commerce. Distribution moved online, search engines and marketplaces rewired demand around traffic acquisition, and checkout optimisation became a science. But the intermediary never disappeared: it simply changed form.

We are now facing another paradigm shift: AI will not just influence commerce; it will restructure it. Discovery is already moving from Google to ChatGPT, price comparison is becoming machine-native, and execution will increasingly happen without a human sitting in front of a checkout page. Agents are emerging as the new interface layer between users and merchants, and when the intermediary changes, payments must change with it. The real question is no longer whether AI will affect payments, but how payment systems evolve when the actor initiating the transaction is no longer human.
The goal of this post is to sketch what this new world looks like, identify the structural gaps that emerge, and evaluate the companies attempting to build the financial infrastructure of agentic commerce.
The case for agentic payments
a16z recently mapped commerce across different categories of purchases, analysing the impact that AI will eventually have on each type of transaction.

In my opinion, routine and lifestyle spending (groceries, subscriptions, travel, clothes) is the most exposed. These are predictable, optimisable, SKU-driven decisions; AI does not just improve research and discovery here, it can realistically take over the execution of the transaction itself.
If that becomes reality, then the implications for payment service providers are structural: payments can no longer be designed around a human executing and, therefore, approving each transaction in real time. We move toward agent-initiated payments, where execution happens inside defined boundaries rather than through repetitive confirmation prompts.
What is effectively happening is that the unit of financial identity is changing. It is no longer a single individual holding a single account, approving transactions one at a time. In an agentic world, a person is a “corporation of one”: a principal with multiple delegates, each operating under scoped authority, drawing from segregated instruments and generating its own audit trail. The account becomes a multi-account entity and the card becomes a mandate.
This architecture is not new. Corporate finance built it decades ago: treasury management, virtual cards, spend controls, subsidiary accounts – all designed because a single account with a single approver cannot work when multiple actors need to spend simultaneously. But consumer finance never needed it. With AI agents, the distinction between how a company manages money and how an individual does is collapsing – every individual is becoming a mini corporation managing multiple synthetic workers. Payments shift from a series of individual approvals to a governed system running continuously in the background.
That model, however, assumed human actors inside defined organizations. Agents are neither employees nor legal entities, and that is where the complexity begins.
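To make the analogy concrete, here is a minimal sketch of the "corporation of one" as a data model: one principal, several delegated agents, each with a scoped mandate tied to a segregated instrument, and an audit trail for every grant and revocation. All names and fields below are illustrative, not a real API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Mandate:
    """Scoped authority granted by a human principal to one agent."""
    agent_id: str
    instrument_id: str           # segregated instrument, e.g. a virtual card
    monthly_limit_cents: int
    allowed_categories: set[str]

@dataclass
class Principal:
    """A 'corporation of one': one human, many delegated agents."""
    user_id: str
    mandates: dict[str, Mandate] = field(default_factory=dict)
    audit_log: list[dict] = field(default_factory=list)

    def _log(self, event: str, agent_id: str) -> None:
        self.audit_log.append({"event": event, "agent": agent_id,
                               "at": datetime.now(timezone.utc).isoformat()})

    def delegate(self, mandate: Mandate) -> None:
        self.mandates[mandate.agent_id] = mandate
        self._log("delegated", mandate.agent_id)

    def revoke(self, agent_id: str) -> None:
        self.mandates.pop(agent_id, None)
        self._log("revoked", agent_id)

me = Principal(user_id="alice")
me.delegate(Mandate("grocery-bot", "card_001", 40_000, {"grocery"}))
me.delegate(Mandate("travel-bot", "card_002", 200_000, {"airlines", "hotels"}))
me.revoke("travel-bot")
```

The point of the sketch is the shape, not the fields: the primitives are the same ones corporate treasury systems already use, just scoped to a single person.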
Key Problems to Solve
As execution moves from humans to agents, several structural frictions surface.
Trust signals for agents
When a human browses the web, they apply years of contextual heuristics: recognising legitimate domains, reading reviews, noticing scam signals. Agents lack this accumulated intuition. They need machine-readable trust signals to distinguish legitimate merchants from fraudulent ones.
What infrastructure is needed? Verified merchant registries, domain authentication standards, and structured attestation schemes that agents can query programmatically.
The broader data quality problem compounds this: if agents are trained or operating on a web full of fake reviews, SEO-poisoned recommendations, and inauthentic product data, they will make systematically bad decisions. Better provenance and authenticity infrastructure (structured, verifiable, machine-readable) is a prerequisite for reliable agentic commerce.
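As a toy illustration of a machine-readable trust signal, the sketch below has a hypothetical registry sign merchant attestations that an agent verifies before transacting. A real scheme would use asymmetric keys and verifiable credentials; stdlib HMAC stands in here only to keep the example self-contained.

```python
import hashlib
import hmac
import json

REGISTRY_KEY = b"demo-registry-secret"  # stand-in for the registry's signing key

def sign_attestation(claims: dict) -> dict:
    """Registry side: sign a canonical serialisation of the merchant claims."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def agent_trusts(attestation: dict) -> bool:
    """Agent side: verify the signature, then check the claims it cares about."""
    payload = json.dumps(attestation["claims"], sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, attestation["sig"])
            and attestation["claims"].get("kyc_verified") is True)

genuine = sign_attestation({"domain": "shop.example", "kyc_verified": True})
tampered = {"claims": {"domain": "scam.example", "kyc_verified": True},
            "sig": genuine["sig"]}  # claims altered, signature no longer matches
```

The agent never applies human intuition; it applies a verification rule that either passes or fails.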
A few exciting projects stand out in this space: nekuda and PayOS. Nekuda is building universal checkout infrastructure that optimises ecommerce stacks for the various agentic-commerce protocols and thus for multiple AI touchpoints. PayOS takes a broader but shallower position, enabling easier product discovery by agents as well as secure agent checkout everywhere.
Protocols such as A2A, MCP, Visa TAP, UCP, AP2 and others will likely play an important role in solving this problem, but they deserve a deeper discussion in a future post.
Identity Delegation
The most important UX problem in agentic payments is the notification trap. If every agent transaction requires explicit human approval, the agent provides little autonomy value; it just generates a stream of push notifications that are more annoying than useful. But if the agent transacts with no oversight, trust collapses.
The solution space is identity delegation with structured guardrails: the human defines the policy (spend limits, merchant categories, time windows, transaction caps) upfront, and the agent operates within that policy without further interruption, with exception transactions escalating to the human.
This requires a new generation of authorisation infrastructure: expressive policy languages, delegation protocols that carry permissions cryptographically rather than through static tokens, and user interfaces that make policy configuration intuitive enough for non-technical consumers.
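A minimal sketch of such a policy, assuming an approve / escalate / decline decision model. The field names and thresholds are invented for illustration; a production policy language would also need cryptographic binding of the policy to the agent's mandate.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SpendPolicy:
    """Guardrails a principal configures once; the agent runs inside them."""
    per_txn_cap_cents: int
    monthly_cap_cents: int
    allowed_mcc: set[str]                # permitted merchant category codes
    active_hours: range = range(8, 22)   # UTC hours when the agent may spend
    spent_this_month: int = 0

    def authorise(self, amount_cents: int, mcc: str, at: datetime) -> str:
        """Return 'approve', 'escalate' (ask the human), or 'decline'."""
        if mcc not in self.allowed_mcc:
            return "decline"                        # out of scope entirely
        if at.hour not in self.active_hours:
            return "escalate"                       # outside the time window
        if amount_cents > self.per_txn_cap_cents:
            return "escalate"                       # exception transaction
        if self.spent_this_month + amount_cents > self.monthly_cap_cents:
            return "escalate"                       # would blow the budget
        self.spent_this_month += amount_cents
        return "approve"                            # inside the mandate: no prompt

policy = SpendPolicy(per_txn_cap_cents=5_000, monthly_cap_cents=40_000,
                     allowed_mcc={"5411"})          # 5411 = grocery stores
routine = policy.authorise(3_000, "5411", datetime(2025, 1, 6, 12))
too_big = policy.authorise(9_000, "5411", datetime(2025, 1, 6, 12))
off_scope = policy.authorise(3_000, "7995", datetime(2025, 1, 6, 12))
```

The escalation path is what dissolves the notification trap: the human is interrupted only for exceptions, while routine spend clears silently.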
This is by far the most explored topic, with multiple companies building under the KYA (Know Your Agent) umbrella. A few notable examples are Skyfire, cheqd.io, Hovi and sapiom.ai. Their core proposition is a service that creates a verified identity for your agents, enabling account creation and authentication: in short, AI agents can prove who they are.
The projects are interesting but most of them approach the problem from the agent outward: they ask how an agent proves its identity to a merchant or a service. The harder question runs in the opposite direction: how does a human principal establish, manage, and revoke the authority they have granted to each agent acting on their behalf? That is a delegation problem, not just an authentication one, and it remains largely unsolved.
Some early signals of how this could evolve are already visible. In Europe, the upcoming eIDAS2 framework introduces digital identity wallets and verifiable credentials that could allow agents to authenticate using trusted identity primitives. Another emerging consideration is device binding. Much of today’s AI agent infrastructure runs in the cloud, while payment authentication historically relies on hardware-bound trust anchors such as secure elements or trusted execution environments. Bridging this gap between cloud agents and device-rooted identity may become a key design challenge for agentic payments.
Guardrails
This problem is closely connected to identity delegation. Once you share your identity with agents and allow them to execute transactions on your behalf, you also need to set operational boundaries. Even a fully trusted agent requires constraints: budget limits, scope restrictions, approval thresholds, time limits, and auditability.
Most of these mechanisms already exist in spend-management systems designed for human employees. Extending them to AI agents introduces enormous complexity, particularly from legal and regulatory perspectives.
Financial infrastructure will need to support a new hybrid organisation: a human assisted by dozens – or potentially hundreds – of synthetic workers. This is something regulators around the world will struggle to interpret within existing frameworks.
A few projects are starting to explore this space.
Catena Labs is by far the most ambitious agentic banking project: they are building an AI-native financial institution, a regulated entity designed from the ground up for AI agents and their human collaborators. Their first product is an Agentic Commerce Toolkit.
Stripe, as I already presented in a previous post, is positioning Stripe Issuing as a way to create scoped virtual cards per agent or per task. Each agent can receive a segregated instrument with predefined limits, merchant controls and reporting layers. In effect, Stripe is exporting corporate spend-management primitives to the agent layer.
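As a sketch of what that looks like in practice: one scoped virtual card per agent, with Stripe Issuing-style spending_controls. It is shown below as a plain dict builder rather than a live stripe.issuing.Card.create call so it runs without credentials; check Stripe's current API reference before relying on exact field names or category values.

```python
# Illustrative payload for creating one scoped virtual card per agent.
# With the stripe-python SDK, a dict like this would be passed to
# stripe.issuing.Card.create; values here are assumptions for the sketch.
def card_params_for_agent(cardholder_id: str, monthly_cap_cents: int,
                          categories: list[str]) -> dict:
    return {
        "cardholder": cardholder_id,
        "currency": "usd",
        "type": "virtual",
        "spending_controls": {
            "spending_limits": [
                {"amount": monthly_cap_cents, "interval": "monthly"},
            ],
            "allowed_categories": categories,
        },
    }

params = card_params_for_agent("ich_agent_grocery", 40_000,
                               ["grocery_stores_supermarkets"])
```

Each agent's card becomes its own reporting line: if the grocery bot misbehaves, you cancel one card, not your whole account.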
Fraud
Fraud risks are enormous. Agentic payments introduce new attack vectors that existing systems were never designed to handle, and the nature of the technology makes them easy to deploy at scale.
Fraud from agents involves malicious actors deploying agents to perform card testing, credential stuffing, or automated abuse. An agent capable of making thousands of transactions per second represents a completely different scale of threat.
Fraud against agents includes prompt injection attacks, malicious merchants exploiting an agent’s lack of human intuition, or spoofed merchant identities intercepting transactions.
Existing fraud models rely on human behavioural signals such as velocity checks, geographic anomalies and device fingerprints. In an agentic world, those assumptions break down.
Risk models must be rebuilt with agent behaviour as a first-class input.
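As a toy illustration of what "agent behaviour as a first-class input" could mean, a rule-based score might weigh agent-specific signals instead of human ones. Every field name and threshold here is invented for the sketch; a real model would learn these weights from data.

```python
def agent_risk_score(txn: dict) -> float:
    """Toy scoring, higher = riskier, using signals human-centric models lack."""
    score = 0.0
    if not txn.get("mandate_id"):                       # no delegated authority attached
        score += 0.5
    if txn.get("agent_txn_per_minute", 0) > 60:         # machine-scale velocity
        score += 0.3
    if txn.get("merchant_first_seen_by_agent", False):  # no prior relationship
        score += 0.1
    if txn.get("amount_cents", 0) > txn.get("mandate_cap_cents", 0):
        score += 0.5                                    # exceeds the agent's own mandate
    return min(score, 1.0)

risky = agent_risk_score({"agent_txn_per_minute": 500, "amount_cents": 9_000,
                          "mandate_cap_cents": 5_000})
fine = agent_risk_score({"mandate_id": "m1", "agent_txn_per_minute": 2,
                         "amount_cents": 1_000, "mandate_cap_cents": 5_000})
```

Note how the mandate itself becomes a fraud signal: a transaction outside the agent's delegated authority is suspicious by construction.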
The risk of fraud becoming systemic is, in my opinion, very high, and this creates a significant opportunity for both incumbents and new entrants.
Old or New Rails
For now, the underlying rails remain largely the same: card networks, ACH and SWIFT, etc. But these systems are designed around human-initiated transactions, with authentication, dispute resolution and liability models that assume a person at one end. Adapting them to agent-initiated commerce requires new primitives for agent authentication, revised risk frameworks and possibly explicit network-level signals identifying agent-originated payments.
Stablecoins introduce a more structural possibility. They are not just another payment method; they could become agent-native rails with programmable money and wallet-to-wallet settlement, where guardrails are enforced by code rather than by human approval and the very concept of checkout begins to dissolve.
But a deeper question remains: in a truly agent-first world, is the existing infrastructure even sufficient?
Today, Visa advertises peak throughput of ~80,000 transactions per second. That sounds enormous, until machines start transacting with machines. If millions of agents are negotiating prices, splitting payments, executing microtransactions, rebalancing subscriptions and interacting with APIs in real time, even 100k TPS may not be excess capacity. It may be a constraint.
Some projects are pushing this logic to the extreme. New infrastructures such as Radius propose machine-native payment rails capable of millions of transactions per second, explicitly designed for autonomous agent activity rather than human commerce.
Conclusions
As agentic payments scale, the shift is broader than consumer finance. It is a transition from human-centric payments to agent-centric payments. The traditional model – a person approving transactions one by one – gives way to “corporations of one” where multiple agents operate under scoped mandates, segregated instruments and programmable limits.
This brings completely new challenges. Some of them are already well resourced, others remain largely unexplored.
Building and signalling trust to agents is a space that could grow massively. If a huge chunk of online commerce passes through AI agents, then virtually every online shop will be expected to offer an agent-friendly experience, something most ecommerce infrastructure does not support today. The Shopifys of the next decade will build ecommerce platforms for agents first and humans second.
Identity infrastructure, on the other hand, is evolving quickly under the “Know Your Agent” narrative. However, the delegation layer – the financial enablement of AI agents under the umbrella of their human principal – remains the least understood and most structurally important problem. Many companies are building tools that allow agents to verify who they are. Far fewer are addressing the deeper challenge of how authority is structured, governed and revoked across a hybrid organisation composed of individuals and synthetic workers.
This gap creates an opportunity to build new financial institutions and PSPs designed for this model. The next Revolut may not only open accounts for people; it may natively open accounts for their agents.
Fraud is already a familiar battlefield for incumbents, but agentic payments significantly expand the attack surface. Autonomous agents enable abuse at machine scale, while new vectors such as prompt injection introduce risks that existing fraud models were not designed to handle. Systems built around human behavioural signals will need to evolve to account for machine-driven activity.
Last but not least, the question of which rails will ultimately support this new wave of technological change remains genuinely open. Building on top of existing rails is the most obvious starting point, but it could easily become a dead end once the agentic economy reaches scale.
The deeper battle around rails may ultimately concern control over transaction data rather than settlement itself. Banks rely heavily on transaction data to power credit models, fraud systems and customer intelligence. If agentic commerce shifts execution toward new wallets, protocols or settlement layers, control over this data may move away from traditional banking infrastructure, something incumbent institutions are unlikely to accept easily.
The direction of travel is clear: commerce is becoming machine-executed, and payments infrastructure will have to follow. My view is that the winners will be those who understand how to properly model this new organisational paradigm of the “corporation of one” and make it easily available to the market.
Thanks to Alistair Hughes, Thomas Mota and Akash Bajwa for their kind reviews.
