Crypto Times Logo Black
Google News Follow Banner
  • News
    • Market
    • Bitcoin
    • Ethereum
    • Altcoins
    • Regulations & Policies
    • DeFi News
    • Blockchain News
    • Industry
  • Exclusive
  • Opinion
  • Learn
    • Explained
    • How To
    • Insights
  • Podcasts
  • More
    • About Us
    • Our Authors
    • Contact Us
    • Editorial Policy
The Crypto TimesThe Crypto Times
  • All News
  • Market
  • Bitcoin
  • Ethereum
  • Altcoins
  • Regulations & Policies
  • Blockchain
  • DeFi
  • Industry
  • Exclusive
  • Opinion
Search
  • News
    • Market
    • Bitcoin
    • Ethereum
    • Altcoins
    • Regulations & Policies
    • Blockchain
    • DeFi
    • Industry
    • Exclusive
    • Opinion
  • Learn
    • Explained
    • How To
    • Insights
  • Quick Links
    • About Us
    • Our Authors
    • Contact Us
    • Editorial Policy
    • AI Policy
    • Sponsored & Advertorial Policy
  • Podcasts
Follow US
© 2026 By Crypto Times. All Rights Reserved.
Industry

SlowMist Labels Grok AI Bankr Hack a Permission Chain Attack

SlowMist confirmed that approximately 80–88% of stolen funds were recovered through negotiation.

Written By:
Dhara Chavda

Last updated: 19 minutes ago
Published 1 hour ago
Share
Last updated: 19 minutes ago
Published 1 hour ago
SlowMist Labels Grok AI Bankr Hack a Permission Chain Attack
Show AI Summary
Blockchain security firm SlowMist attributes the Grok/Bankr exploit to an AI agent permission chain abuse, where one AI system’s output is mistakenly trusted by another.
The ‘Grok Wallet’ was actually an associated wallet automatically generated by Bankr, with private keys managed by a third-party wallet service.
The exploit involved a two-stage attack, starting with privilege escalation through a centralized mechanism, followed by prompt injection that tricked xAI’s Grok chatbot into outputting a transfer command.

Blockchain security firm SlowMist has published a forensic analysis of the May 4 Grok/Bankr exploit, formally classifying it as an “AI Agent permission chain abuse”—a term that describes attacks where the output of one AI system is treated as trusted financial authorization by another.

The analysis goes significantly deeper than initial community reporting by mapping the full kill chain: from privilege escalation to prompt injection to on-chain execution. As reported on May 4, an attacker tricked xAI’s Grok chatbot into outputting a transfer command via Morse code, which Bankr’s automated system then executed—draining roughly $175,000 in DRB tokens from what was publicly labeled as “Grok’s wallet” on the Base chain.

The “Grok Wallet” Was Never Grok’s

SlowMist’s report settles a key point of confusion from the initial incident. The address labeled as the “Grok Wallet” (0xb1058…e4f9) was not controlled by xAI. It was an associated wallet automatically generated by Bankr for the @grok X account, with private keys custodially managed by a third-party wallet service that Bankr relied upon. BaseScan has since corrected its label from “Grok” to “Bankr 1.”

The wallet’s large DRB holdings — the approximately 3 billion tokens that were drained — also originated from Bankr’s own mechanism design. Earlier this year, a user asked Grok for token naming suggestions. Grok replied with “DebtReliefBot” (DRB), and Bankr’s system interpreted that response as a deployment signal, triggering token creation on Base. The creator allocation was then automatically assigned to the associated wallet under Bankr’s launchpad rules.

Two-Stage Attack: Escalation Then Injection

SlowMist breaks the exploit into two distinct phases that together form a complete chain from untrusted input to asset transfer.

In the first stage—privilege escalation—the attacker (linked to the address ilhamrafli.base.eth) activated a Bankr Club Membership for the wallet through a centralized mechanism. This single action unlocked Bankr’s high-privilege agentic toolset, including the ability to execute transfers. No secondary confirmation, transfer limits, or anomaly detection was triggered.

In the second stage — prompt injection — the attacker sent a Morse code message to @grok on X. Grok, functioning as designed, decoded the message and tagged @bankrbot in its public reply. Bankr’s scanner treated Grok’s reply as a valid executable command and automatically initiated the on-chain transfer of roughly 3 billion DRB tokens (approximately $175,000 at the time).

The attacker then rapidly swapped the DRB into USDC and ETH before deleting related accounts and going offline.

Root Cause: Trust Model Collapse

SlowMist identifies four systemic failures in its root cause analysis.

First, a trust model flaw: Bankr mapped Grok’s natural language outputs directly into executable financial instructions without validating the instruction source, intent authenticity, or anomalous patterns such as non-standard encodings like Morse code.

Second, insufficient permission isolation: membership activation granted immediate access to high-risk transfer capabilities without multi-step confirmation or spending limits.

Third, blurred boundaries between agents: Grok’s outputs as a conversational AI should never have been treated as equivalent to financial authorization—but Bankr’s downstream execution layer did exactly that.

Fourth, input handling risks: LLMs are inherently vulnerable to prompt injection, a known issue that becomes catastrophically amplified when integrated with real asset execution systems.

SlowMist emphasizes that Grok itself never held private keys or executed on-chain operations. It functioned purely as an exploited intermediary layer.

Funds Largely Recovered

SlowMist’s report confirms that approximately 80–88% of the stolen value was returned through negotiations, primarily in USDC and ETH. The remaining portion was treated as an informal bug bounty. Bankr has since implemented restriction measures and publicly confirmed the attack details.

A Warning for the AI + Crypto Stack

SlowMist concludes with a set of security recommendations aimed at the broader AI-crypto agent ecosystem: natural language outputs must be strictly decoupled from financial actions; high-value operations need multi-factor verification, transfer limits, and anomaly detection; inter-agent interactions should use structured, verifiable protocols rather than plain text; and prompt injection threat models must be incorporated into the full lifecycle design of agent systems.

The analysis arrives as AI agent security becomes a central concern across the industry. In February, an AI agent called Lobstar Wilde accidentally transferred $450,000 in tokens due to a misconfiguration. In April, security researchers found “LLM routers” — services sitting between users and AI models — acting as attack vectors that drained a client wallet of $500,000. Ledger has responded by publishing a 2026 roadmap specifically targeting AI agent security, including hardware-backed agent identities and policy enforcement.

Disclaimer: The information researched and reported by The Crypto Times is for informational purposes only and is not a substitute for professional financial advice. Investing in crypto assets involves significant risk due to market volatility. Always Do Your Own Research (DYOR) and consult with a qualified Financial Advisor before making any investment decisions.

Follow The Crypto Times on Google News to Stay Updated!      Google News
Google News Banner

TAGGED:Artificial Intelligence (AI)Crypto Hack
Share This Article
Whatsapp Whatsapp LinkedIn Telegram Copy Link
Dhara Chavda- Crypto Research Analyst at The Crypto Times
By Dhara Chavda
Follow:
Dhara Chavda is a Content Strategist and Research Analyst with 5 years of experience in the crypto industry. She holds a Bachelor’s degree in Computer Engineering and brings a strong technical perspective to her work. Dhara specializes in DeFi, price analysis, and the core mechanics of cryptocurrencies. She also works on crypto news, including research, analysis, and assigning stories, ensuring accurate and timely coverage of key developments in the space.

Latest News

Litecoin Releases 5th Core Patch in 2 Months After MWEB Crisis
Litecoin Releases 5th Core Patch in 2 Months After MWEB Crisis
Alchemy Pay Launches Alchemy Chain Mainnet to Accelerate Global Stablecoin Payments
Alchemy Pay Launches Alchemy Chain Mainnet to Accelerate Global Stablecoin Payments
SIREN Meme Coin Awakens from the Depths — Rallied ~50% in 24 Hours
SIREN Meme Coin Awakens from the Depths — Rallied ~50% in 24 Hours
Pantera Flags Weak Onchain Utility in $321B Tokenization Boom
Pantera Flags Weak Onchain Utility in $321B Tokenization Boom
Bitcoin Miner Core Scientific Stock Drops 7% After $347M Q1 Net Loss
Bitcoin Miner Core Scientific Stock Drops 7% in Pre-Market: Posts $347M Q1 Net Loss

Find Us on Socials

You may also like

BNY Partners With Finstreet and ADI to Launch Digital Asset Custody in Abu Dhabi

BNY Partners With Finstreet and ADI to Launch Digital Asset Custody in Abu Dhabi

Anthropic Brings AI Compliance Agents to Banking via Claude — Crypto Exchanges Are Next

Anthropic Brings AI Finance Agents to Banking via Claude — Crypto Exchanges Are Next

Today in Crypto: Tokenization Breakthrough, Clarity Act Deadline Set, Miners Pivot to AI Power Plays

Today in Crypto: Tokenization Breakthrough, Clarity Act Deadline Set, Miners Pivot to AI Power Plays

TrustedVolumes Exploit Drains $5.9M Through 1inch Liquidity System

TrustedVolumes Exploit Drains $5.9M Through 1inch Liquidity System

The Crypto Times Logo PNG

Providing real-time, accurate Crypto reporting. Your trusted source for Crypto News and Research.

Stay Updated

All News
Exclusive
Opinions
Learn
Podcasts

Company

About Us
Our Authors
Editorial Policy
AI Policy
Advertorial Policy

Get In Touch

Contact Us
Career

Find Us on Socials

X-twitter Linkedin Telegram Youtube Instagram

© 2026 The Crypto Times | A BITROCK TECHNOLOGIES L.L.C. Company.

DMCA.com Protection Status
  • Terms and Conditions
  • Disclaimer
  • Privacy Policy
  • Cookie policy
Do Not Sell or Share My Personal Information