Pi Network co-founder Nicolas Kokkalis said the rise of AI-generated content, bots, and social engineering attacks is forcing crypto platforms to rethink how identity is verified online.
Speaking at Consensus Miami 2026 on Thursday, Kokkalis said the industry increasingly needs systems that can distinguish between humans and automated actors without requiring users to surrender unnecessary personal information.
He added, “In some cases you need to know exactly who the person is. If someone goes to the bank and wants to withdraw money, you really need to know who that person is. In other cases, you only need to know whether the act being conducted is happening by a human or a bot.”
Kokkalis framed the issue as one of the most important infrastructure challenges facing crypto and internet platforms as AI tools become more sophisticated.
Proof-of-humanity beyond traditional identity checks
According to Kokkalis, proof-of-humanity does not always require revealing a person’s full identity. Instead, the level of disclosure should depend on the purpose of the interaction.
He said some situations, such as withdrawing money from a bank, require full identity verification. But in many online systems, the real need is simply proving that an action is being performed by a real person rather than a bot.
Kokkalis also pointed to online review systems and voting mechanisms, arguing that proof-of-humanity can help prevent a single actor from manipulating platforms through large numbers of fake accounts. He noted that Pi Network already operates with KYC-linked accounts across its blockchain ecosystem, describing it as a foundation for broader human-verification systems.
Privacy without “doxing” users
Kokkalis said privacy-preserving verification should become a core principle for digital identity systems. While advanced cryptographic methods such as zero-knowledge proofs often dominate discussions around privacy, he argued that simpler approaches can already achieve similar outcomes in practical applications.
To illustrate the point, he compared online verification with traditional ID checks used for age-restricted purchases. In many cases, users reveal far more personal information than necessary when showing identification documents.
Kokkalis said a trusted authority could instead issue a cryptographically signed statement confirming that a user is over a required age threshold without exposing details such as home address or exact birth date. Under that model, the recipient would only verify the authenticity of the credential rather than access the user’s underlying personal information.
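The credential model Kokkalis describes can be sketched in a few lines. In this hypothetical example, the issuer signs only the minimal claim (“over a given age”), and the verifier checks authenticity of that claim without ever seeing a birth date or address. Real deployments would use asymmetric signatures (e.g. Ed25519) so verifiers never hold a signing key; a symmetric HMAC is used here only to keep the sketch dependency-free, and all names and keys are illustrative assumptions, not any actual Pi Network API.

```python
import hmac
import hashlib
import json

# Assumed issuer key; in practice this would be an asymmetric key pair,
# with verifiers holding only the public half.
ISSUER_KEY = b"issuer-secret-key"

def issue_credential(user_id: str, over_age: int) -> dict:
    """Issuer signs only the minimal claim, not the full identity record."""
    claim = {"sub": user_id, "over_age": over_age}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_credential(cred: dict, required_age: int) -> bool:
    """Verifier checks authenticity and the age threshold, nothing else."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["sig"])
            and cred["claim"]["over_age"] >= required_age)

cred = issue_credential("user-123", over_age=21)
print(verify_credential(cred, required_age=18))  # True
```

The key design point is that the verifier learns only two things: the credential is authentic, and the threshold is met. Tampering with the claim invalidates the signature, so the check fails.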
AI pressure driving demand for human verification
The discussion comes as crypto and technology firms increasingly confront AI-generated impersonation, automated scams, and synthetic identities across online platforms.
Kokkalis suggested that systems capable of proving human uniqueness while limiting data exposure may become more important as AI agents participate more actively in digital economies. He added that balancing verification and privacy will likely shape how blockchain-based identity systems evolve over the coming years.
