AI Hallucinations Lead To a New Cyber Threat: Slopsquatting
Researchers have discovered a troubling new tactic in cyber threats known as “Slopsquatting”. It exploits AI hallucinations: believable but non-existent package names suggested by coding tools such as GPT-4 and CodeLlama. Because these names were never registered, attackers can claim them and use them to distribute malicious code. Coined by security developer Seth Larson, “Slopsquatting” mirrors typosquatting, but here the error comes not from the user, but from the AI model itself. In the researchers' tests, 19.7% of suggested packages (roughly one in five) were fabricated, and open-source models tended to hallucinate more than their commercial counterparts.
In their analysis, the researchers found that hallucinated names often resembled real package names closely enough to deceive users: 38% of hallucinations showed moderate similarity to actual packages, and that confusion is exactly what attackers exploit. The study also notes that many of these AI-induced hallucinations are repetitive and consistent across runs, which makes them easier to anticipate and register for malicious use. One basic defence is to verify that an AI-suggested package actually exists in the official registry before installing it, as in the sketch below.
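The following is a minimal, hypothetical sketch (not from the article) of such a check for Python packages: it queries PyPI's public JSON endpoint, which returns HTTP 404 for names that have never been registered. Note that existence alone is not proof of safety, since an attacker may already have registered a hallucinated name.

```python
"""Sketch: flag AI-suggested package names that do not exist on PyPI.

A name that has never been registered is a candidate hallucination --
and a name a slopsquatter could later claim.
"""

import sys
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a real PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # never registered: likely hallucinated
        raise  # other HTTP errors are worth surfacing


if __name__ == "__main__":
    # Usage: python check_packages.py requests flask some-suggested-pkg
    for candidate in sys.argv[1:]:
        status = "exists" if package_exists_on_pypi(candidate) else "NOT FOUND"
        print(f"{candidate}: {status}")
```

A check like this catches only unregistered names; packages that have already been squatted still require manual review of the project's age, maintainer and download history.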
Key Points
- Slopsquatting is a new cyber threat exploiting AI-generated hallucinations from coding tools.
- About 20% of tested AI-generated package names are fake and can be registered by attackers.
- Open-source models generally show higher rates of hallucination, with CodeLlama being the worst offender.
- Slopsquatting differs from traditional typosquatting by leveraging AI inaccuracies rather than user mistakes.
- The study found that 38% of hallucinated packages had a naming structure similar to real ones, making them particularly deceptive.
Why should I read this?
If you’re into tech or cybersecurity, this article’s a must-read. As AI becomes more integrated into our lives and workflows, understanding the new vulnerabilities it introduces is crucial. Slopsquatting might just be the tip of the iceberg in AI-related cybersecurity threats, so get clued in before your next project falls victim to an AI-generated hallucination!