Threat actors are abusing X’s generative AI bot Grok to spread phishing links, according to researchers at ESET. The attackers achieve this by tricking Grok into thinking it’s answering a question, and providing a link in its answer.
“In this attack campaign, threat actors circumvent X’s ban on links in promoted posts (designed to fight malvertising) by running video card posts featuring clickbait videos,” ESET says.
“They are able to embed their malicious link in the small ‘from’ field below the video. But here’s where the interesting bit comes in: The malicious actors then ask X’s built-in GenAI bot Grok where the video is from. Grok reads the post, spots the tiny link, and amplifies it in its answer.”
The researchers found hundreds of accounts using this technique, with their posts receiving millions of impressions. Since Grok is a legitimate tool, these posts also received amplified SEO results.
While ESET’s report focuses on Grok, the researchers note that this same technique could be applied to any generative AI tool.
“There really is an unlimited number of variations on this threat,” the researchers write. “Your number one takeaway should be never to blindly trust the output of any GenAI tool. You simply can’t assume that the LLM has not been tricked by a resourceful threat actor. They are banking on you to do so. But as we’ve seen, malicious prompts can be hidden from view – in white text, metadata or even Unicode characters. Any GenAI that searches publicly available data to provide you with answers is also vulnerable to processing data that is “poisoned” to generate malicious content.”
KnowBe4 empowers your workforce to make smarter security decisions every day. Over 70,000 organizations worldwide trust the KnowBe4 HRM+ platform to strengthen their security culture and reduce human risk.
ESET has the story.