Key Highlights
- AgentKit links AI agents to verified human identities using World ID and Coinbase’s x402 protocol.
- Websites can require proof-of-personhood before granting access or processing payments.
- The toolkit aims to curb bot abuse while enabling accountable automation online.
World Foundation, an identity project cofounded by Sam Altman, has launched a developer toolkit designed to ensure automated software acts on behalf of identifiable human users. The beta release, called AgentKit, integrates World’s identity system with x402, an open protocol developed by Coinbase and Cloudflare.
The goal is to allow AI agents to present cryptographic proof that they represent a unique individual when interacting with websites, APIs, or online platforms.
Addressing bot overload across the internet
Automated agents are increasingly performing tasks such as booking travel, purchasing tickets, comparing prices, and interacting with services at scale.
Platforms often struggle to distinguish legitimate activity from large bot networks capable of overwhelming systems or monopolizing limited resources. Developers say the new toolkit could help curb mass automation, for example by preventing thousands of software agents from competing for the same tickets or reservations.
How the identity link works
AgentKit allows users who have verified their identity through World’s biometric system to delegate their credentials to an AI agent.
World’s identity network, originally launched as Worldcoin, uses a device known as the Orb to confirm that each participant is a unique person.
Once linked, an agent can prove it represents a human without revealing personal details, using cryptographic verification rather than traditional identity disclosure. According to the project, the network includes tens of millions of verified users across more than 160 countries.
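The deduplication idea behind "one person, one agent" can be sketched in code. This is a hypothetical illustration, not World's actual API: in World ID-style zero-knowledge systems, each proof carries a nullifier hash that is stable per person per action but reveals nothing about identity, so a service can count unique humans without collecting personal data. The class and field names below are assumptions.

```python
from dataclasses import dataclass


@dataclass
class PersonhoodProof:
    """Hypothetical shape of a zero-knowledge personhood proof.

    The nullifier_hash is stable per (person, action) yet reveals
    nothing about who the person is, so a service can deduplicate
    humans without learning identities.
    """
    action: str          # e.g. "buy-ticket-event-123"
    nullifier_hash: str  # unique per person per action
    zk_proof: bytes      # opaque proof blob, verified elsewhere


class UniqueHumanGate:
    """Admits each verified human at most once per action."""

    def __init__(self) -> None:
        self._seen: set[tuple[str, str]] = set()

    def admit(self, proof: PersonhoodProof) -> bool:
        # A real verifier would cryptographically check zk_proof here;
        # this sketch assumes that check has already passed.
        key = (proof.action, proof.nullifier_hash)
        if key in self._seen:
            return False  # same human retrying: reject the duplicate
        self._seen.add(key)
        return True
```

Under this model, a ticketing site could admit thousands of distinct verified people while rejecting a single person's thousand agents, since all of that person's proofs share one nullifier hash.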
x402 enables verification before access
By incorporating the x402 protocol, websites can request proof of personhood before granting access to services or processing payments. This means platforms could reject requests from unverified bots or limit usage based on the number of real people involved rather than the number of automated processes.
Developers could also choose to combine identity checks with micropayments, creating a flexible framework for controlling automated traffic.
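The combined gate can be sketched as a toy request handler. x402 builds on HTTP's long-reserved 402 Payment Required status: the server answers a bare request with 402 plus a description of what it wants, and the client retries with the required artifacts attached. The header names and response fields below are assumptions for illustration, not the actual x402 wire format.

```python
def handle_request(headers: dict[str, str]) -> tuple[int, dict]:
    """Toy gatekeeper for an x402-style flow.

    Assumed header names (illustrative only):
      X-Personhood-Proof : token proving a unique verified human
      X-Payment          : signed micropayment authorization
    """
    requirements = {
        "personhood": "required",
        "price": "0.01",
        "asset": "USDC",
    }
    if "X-Personhood-Proof" not in headers or "X-Payment" not in headers:
        # 402 Payment Required: tell the client what to attach, then retry.
        return 402, {"error": "payment and personhood proof required",
                     "accepts": requirements}
    # A real server would cryptographically verify both artifacts here.
    return 200, {"data": "service response"}
```

A first request with no headers gets a 402 describing the requirements; a retry carrying both a personhood proof and a payment authorization is served. Dropping either check from the condition yields a payments-only or identity-only gate, which is the flexibility the protocol combination is meant to offer.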
Broader context
As AI agents gain the ability to transact, browse, and act independently, verifying accountability becomes a growing challenge for online platforms.
Tools that bind automation to real users could help prevent abuse while enabling legitimate uses, shaping how identity, privacy, and trust evolve in an increasingly automated internet.
