Grafana Patches AI Bug That Could Have Leaked User Data
Summary
Researchers at security firm Noma disclosed “GrafanaGhost,” a prompt-injection-style vulnerability in Grafana’s AI features that could have allowed attackers to exfiltrate sensitive information. The flaw abused how Grafana’s AI processed indirect prompts, notably via image tags and protocol-relative URLs, and used the “INTENT” keyword to bypass model guardrails. Grafana patched the image renderer in its Markdown component after coordinated disclosure; the vendor and Noma disagree over how “silent” or “zero-click” the attack would have been.
Key Points
- Researchers at Noma discovered an indirect prompt injection (GrafanaGhost) that targeted Grafana’s AI assistant.
- The attack used image tags and protocol-relative URLs to bypass domain validation and inject commands that the AI treated as benign context.
- Attackers could store a malicious indirect prompt (for example in logs) so it would execute automatically when Grafana’s AI later processed that content.
- Noma says the AI processed the payload autonomously and exfiltrated data without warning the user; Grafana disputes the “zero-click”/silent-execution characterisation.
- Grafana patched the Markdown image renderer quickly after responsible disclosure and reports no evidence of exploitation in the wild or data leaks from Grafana Cloud.
- Organisations using Grafana should patch, review access to AI features and stored content (logs, images), and restrict external content rendering to reduce risk.
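To see why protocol-relative URLs defeat naive domain checks, consider the sketch below. It is illustrative only (the function names, allowlist, and payload are hypothetical, not Grafana's actual code): a check that treats any URL without an explicit `http(s)://` prefix as a safe internal path will wave through `//attacker.example/...`, which a browser resolves to an external host.

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"grafana.example.com"}  # hypothetical internal allowlist

def naive_is_internal(url: str) -> bool:
    # Naive check: anything not starting with "http(s)://" is assumed
    # to be a relative, and therefore "safe", internal path.
    return not url.startswith(("http://", "https://"))

def strict_is_internal(url: str) -> bool:
    # Resolve the URL the way a browser would: a protocol-relative URL
    # ("//attacker.example/x.png") inherits the page's scheme but still
    # points at an external host, which urlparse exposes via netloc.
    parsed = urlparse(url, scheme="https")
    return parsed.netloc == "" or parsed.netloc in ALLOWED_HOSTS

payload = "//attacker.example/exfil.png"
print(naive_is_internal(payload))   # the naive check is fooled
print(strict_is_internal(payload))  # the external host is detected
```

The naive check returns `True` for the payload while the strict check returns `False`, which is why validation should parse and resolve URLs rather than pattern-match on prefixes.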
Why should I read this?
If your organisation uses Grafana (or any UI with embedded AI features), this is the kind of sneaky trick that can quietly pull data out of places you think are safe. The write-up shows exactly how images and tiny URL tweaks can turn into data-leaking prompts — and yes, patching is only step one. Read it so you can fix, audit, and stop this sort of thing from being an easy win for attackers.
Author style
Punchy: this isn’t just another patch note. It highlights a new class of AI-related attack that sits at the intersection of observability and agentic AI. If you run Grafana or expose logs and user-facing content to AI components, the details matter — dig into the full disclosure and mitigation steps.
Context and Relevance
Grafana sits at the heart of many organisations’ telemetry and operational data, so an AI-based exfiltration vector is a material risk. This incident underscores wider industry trends: as vendors add AI assistants and autonomous features to developer and ops tools, prompt-injection and indirect-content attacks become a practical threat. The disagreement between the researcher and vendor also highlights the importance of transparent disclosure and thorough post-patch verification.
Practical Mitigations
- Apply Grafana’s patch immediately and follow any vendor guidance.
- Limit AI assistants’ access to logs, images and externally hosted content; block or validate protocol-relative and external resources where possible.
- Audit stored user content and logs for unexpected HTML/image tags or externally hosted assets.
- Enable least privilege for AI features and monitor AI-driven outbound requests for anomalies.
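As a starting point for the audit step above, a simple scan can flag stored content containing HTML or Markdown image tags and protocol-relative URLs. This is a minimal sketch under assumed patterns, not a complete scanner; the regexes and function name are illustrative and should be tuned to your own log formats.

```python
import re

# Hypothetical patterns flagging content that could smuggle an indirect
# prompt into an AI assistant: raw HTML image tags, Markdown image
# syntax, and protocol-relative URLs embedded in stored text.
SUSPICIOUS = [
    re.compile(r"<img\b", re.IGNORECASE),           # raw HTML image tags
    re.compile(r"!\[[^\]]*\]\("),                   # Markdown image syntax
    re.compile(r"(?<![:\w])//[\w.-]+\.[a-z]{2,}"),  # protocol-relative URLs
]

def flag_suspicious_lines(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs worth a closer look."""
    hits = []
    for n, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((n, line))
    return hits

sample = "ok line\nerror: see ![chart](//evil.example/a.png)\nnormal"
for n, line in flag_suspicious_lines(sample):
    print(n, line)
```

A flagged line is not proof of an attack, only a prompt for manual review; pair this with blocking external content rendering at the source.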
Source
Source: https://www.darkreading.com/application-security/grafana-patches-ai-bug-leaked-user-data
