Blockchain security firm SlowMist has published a forensic analysis of the May 4 Grok/Bankr exploit, formally classifying it as an “AI Agent permission chain abuse”—a term that describes attacks where the output of one AI system is treated as trusted financial authorization by another.
The analysis goes significantly deeper than initial community reporting by mapping the full kill chain: from privilege escalation to prompt injection to on-chain execution. As reported on May 4, an attacker tricked xAI’s Grok chatbot into outputting a transfer command via Morse code, which Bankr’s automated system then executed—draining roughly $175,000 in DRB tokens from what was publicly labeled as “Grok’s wallet” on the Base chain.
The “Grok Wallet” Was Never Grok’s
SlowMist’s report settles a key point of confusion from the initial incident. The address labeled as the “Grok Wallet” (0xb1058…e4f9) was not controlled by xAI. It was an associated wallet automatically generated by Bankr for the @grok X account, with private keys custodially managed by a third-party wallet service that Bankr relied upon. BaseScan has since corrected its label from “Grok” to “Bankr 1.”
The wallet’s large DRB holdings — the approximately 3 billion tokens that were drained — also originated from Bankr’s own mechanism design. Earlier this year, a user asked Grok for token naming suggestions. Grok replied with “DebtReliefBot” (DRB), and Bankr’s system interpreted that response as a deployment signal, triggering token creation on Base. The creator allocation was then automatically assigned to the associated wallet under Bankr’s launchpad rules.
Two-Stage Attack: Escalation Then Injection
SlowMist breaks the exploit into two distinct phases that together form a complete chain from untrusted input to asset transfer.
In the first stage—privilege escalation—the attacker (linked to the address ilhamrafli.base.eth) activated a Bankr Club Membership for the wallet through a centralized mechanism. This single action unlocked Bankr’s high-privilege agentic toolset, including the ability to execute transfers. No secondary confirmation, transfer limits, or anomaly detection was triggered.
In the second stage — prompt injection — the attacker sent a Morse code message to @grok on X. Grok, functioning as designed, decoded the message and tagged @bankrbot in its public reply. Bankr’s scanner treated Grok’s reply as a valid executable command and automatically initiated the on-chain transfer of roughly 3 billion DRB tokens (approximately $175,000 at the time).
The attacker then rapidly swapped the DRB into USDC and ETH before deleting related accounts and going offline.
Root Cause: Trust Model Collapse
SlowMist identifies four systemic failures in its root cause analysis.
First, a trust model flaw: Bankr mapped Grok’s natural language outputs directly into executable financial instructions without validating the instruction source, intent authenticity, or anomalous patterns such as non-standard encodings like Morse code.
Second, insufficient permission isolation: membership activation granted immediate access to high-risk transfer capabilities without multi-step confirmation or spending limits.
Third, blurred boundaries between agents: Grok’s outputs as a conversational AI should never have been treated as equivalent to financial authorization—but Bankr’s downstream execution layer did exactly that.
Fourth, input handling risks: LLMs are inherently vulnerable to prompt injection, a known issue that becomes catastrophically amplified when integrated with real asset execution systems.
SlowMist emphasizes that Grok itself never held private keys or executed on-chain operations. It functioned purely as an exploited intermediary layer.
Funds Largely Recovered
SlowMist’s report confirms that approximately 80–88% of the stolen value was returned through negotiations, primarily in USDC and ETH. The remaining portion was treated as an informal bug bounty. Bankr has since implemented restriction measures and publicly confirmed the attack details.
A Warning for the AI + Crypto Stack
SlowMist concludes with a set of security recommendations aimed at the broader AI-crypto agent ecosystem: natural language outputs must be strictly decoupled from financial actions; high-value operations need multi-factor verification, transfer limits, and anomaly detection; inter-agent interactions should use structured, verifiable protocols rather than plain text; and prompt injection threat models must be incorporated into the full lifecycle design of agent systems.
The analysis arrives as AI agent security becomes a central concern across the industry. In February, an AI agent called Lobstar Wilde accidentally transferred $450,000 in tokens due to a misconfiguration. In April, security researchers found “LLM routers” — services sitting between users and AI models — acting as attack vectors that drained a client wallet of $500,000. Ledger has responded by publishing a 2026 roadmap specifically targeting AI agent security, including hardware-backed agent identities and policy enforcement.
