University Researchers Warn of AI-Driven Phone Scams


Written By: Dishita Malvania

Reviewed By: Jahnu Jagtap


Researchers at the University of Illinois Urbana-Champaign (UIUC) have built AI-driven phone scam agents using OpenAI’s voice API. The agents can carry out a range of common phone scams, raising alarms about the potential misuse of artificial intelligence in fraudulent activities.

According to UIUC assistant professor Daniel Kang, phone scams impact approximately 18 million Americans each year, resulting in losses of around $40 billion. The new AI agents, powered by OpenAI’s GPT-4o model, can mimic conversations and respond to audio prompts, making them more convincing and harder to detect. 

In the research report, Kang notes that the average cost of running a successful scam with these agents is a mere $0.75, dramatically lowering the barrier to entry for scammers.

The research team conducted various experiments to assess the effectiveness of these AI scam agents. They focused on common scams like crypto transfers, gift card schemes, and the theft of personal credentials. 

Remarkably, the AI agents had an overall success rate of 36%, with many failures attributed to transcription errors rather than the agents’ capabilities. The simplicity of their design—just 1,051 lines of code—highlights how easily such dual-use technologies can be developed.

Scammers often impersonate legitimate organizations, such as banks or government agencies, to trick victims into revealing sensitive information. The AI agents can execute complex scams that require multiple steps, such as navigating websites and handling two-factor authentication. 

For instance, a bank transfer scam might involve 26 distinct actions and take up to three minutes to complete. This complexity, coupled with the agents’ ability to maintain coherent conversations, makes them a formidable threat.

As AI technology continues to advance, users need to remain vigilant. While AI offers significant benefits, it also poses risks when misused. Individuals should be cautious about sharing personal information over the phone and stay informed about the tactics scammers employ, so they can better protect themselves against these evolving threats.

Also Read: Meta Develops AI Search Engine to Compete with Google



Dishita Malvania is a Crypto Journalist with 3 years of experience covering the evolving landscape of blockchain, Web3, AI, finance, and B2B tech. With a background in Computer Science and Digital Media, she blends technical knowledge with sharp editorial insight. Dishita reports on key developments in the crypto world—including Litecoin, WazirX, Solana, Cardano, and broader blockchain trends—alongside interviews with notable figures in the space. Her work has been referenced by top digital media outlets like Entrepreneur.com, The Independent, The Verge, and Metro.co, especially on trending topics like Elon Musk, memecoins, Trump, and notable rug pulls.

Jahnu Jagtap is a Research Analyst with over 5 years of experience in crypto, finance, fintech, blockchain, Web3, and AI. He holds a BSc in Mathematics and is certified in Blockchain and Its Applications (SWAYAM MHRD), Cryptocurrency (Upskillist), and NISM Certifications. Jahnu specializes in technical, on-chain, and fundamental analysis, while also closely tracking global macro trends, regulations, lawsuits, and U.S. equities. With a strong analytical background and editorial insight, he drives content that delivers clarity and depth in the fast-evolving world of digital finance.