
Researchers have identified a major security risk in AI infrastructure, with malicious large language model (LLM) routers capable of stealing sensitive crypto data.
The study found that some third-party routers, which sit between users and AI models, can inject harmful code and extract credentials without detection.
“26 LLM routers are secretly injecting malicious tool calls and stealing creds,”
Said Chaofan Shou.
These routers can access plaintext data, meaning developers using AI tools for crypto-related tasks may unknowingly expose private keys, seed phrases and wallet credentials.
In testing, researchers found multiple attack methods, including credential harvesting and code injection, with one case successfully draining Ether from a decoy wallet.
The issue is difficult to detect because routers legitimately process sensitive data, making the line between normal function and malicious activity unclear.
Experts warn developers to avoid sharing sensitive information with AI agents and call for stronger safeguards, including cryptographic verification of AI responses, to prevent future attacks.