AI Agent Security: Whose Responsibility Is It?

Summary

Agentic AI is being adopted rapidly by organisations, and platform vendors (think Microsoft, Salesforce) are building agents directly into their products. That speed brings security risk: agents can inadvertently access, expose or exfiltrate sensitive data when access controls, secrets handling and prompt hygiene aren’t enforced.

The article uses recent research (for example Noma’s discovery of the “ForcedLeak” vulnerability in Salesforce Agentforce) to show how agentic AI can leak CRM and other enterprise data. Experts quoted in the piece stress a shared-responsibility model: vendors must secure infrastructure and offer protective tools, while customers must control data access, apply guardrails and architect AI deployments securely. The consensus: vendor tools help but do not replace sound customer-side architecture, access control and user training.
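
The article stays at the level of responsibilities rather than code, but the flavour of customer-side guardrail it points to can be sketched. Below is a minimal, illustrative Python example (the domain names and the filter_outbound_urls helper are invented for this sketch, not taken from the article or any vendor API) of an egress allow-list that treats unexpected URLs in an agent's reply as a possible exfiltration channel, one narrow way to blunt indirect prompt injection.

    from urllib.parse import urlparse

    # Hypothetical egress allow-list: the only domains this agent's output may
    # reference. Anything else is treated as a possible exfiltration channel.
    ALLOWED_DOMAINS = {"crm.mycompany.example", "docs.mycompany.example"}

    def filter_outbound_urls(agent_output: str) -> str:
        """Redact URLs in an agent's reply that point outside the allow-list."""
        cleaned = []
        for token in agent_output.split():
            host = urlparse(token).netloc.lower()
            if host and host not in ALLOWED_DOMAINS:
                cleaned.append("[blocked-url]")
            else:
                cleaned.append(token)
        return " ".join(cleaned)

    print(filter_outbound_urls(
        "Summary ready: see https://attacker.example/c?d=secret for details"))
    # -> Summary ready: see [blocked-url] for details

A real deployment would enforce this (and much more) in a policy layer outside the agent itself, which is exactly the customer-side architecture point the experts quoted in the piece keep returning to.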

Key Points

  • Agentic AI adoption is accelerating, but rapid rollouts often outpace security testing and controls.
  • Real-world flaws (e.g. “ForcedLeak”) demonstrate that agents can expose sensitive enterprise data via indirect prompt injection or improper access permissions.
  • Security is a shared responsibility: vendors secure infrastructure; customers must secure data, access controls and the AI deployment architecture.
  • Data used by agents typically resides in separate enterprise repositories — it’s the customer’s responsibility to govern who/what can retrieve it.
  • Vendor-provided protections (MFA, secrets scanning, DLP) are useful but can create a false sense of security if the customer’s architecture is weak.
  • User awareness, training and strict guardrails are essential because agents are non-human actors that can be given excessive privileges or used insecurely.
  • Fundamental fixes often require changes to customer-side architecture and access models, not just tooling inside the agent (see the permission-check sketch after this list).
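
To make that last point concrete, here is a short, hypothetical sketch (the AgentIdentity class, object names and run_crm_query placeholder are illustrative, not any vendor's API) of treating an agent as its own least-privilege principal, so every retrieval is checked against the agent's entitlements rather than a human user's broader access.

    from dataclasses import dataclass

    # Hypothetical model: the agent is a first-class principal with its own
    # minimal entitlements, instead of inheriting a human user's permissions.
    @dataclass(frozen=True)
    class AgentIdentity:
        name: str
        allowed_objects: frozenset  # CRM object types this agent may read

    AGENT = AgentIdentity(name="support-summary-agent",
                          allowed_objects=frozenset({"Case", "KnowledgeArticle"}))

    def run_crm_query(object_type: str, query: str) -> list:
        return []  # stand-in for the real CRM integration layer

    def retrieve_records(agent: AgentIdentity, object_type: str, query: str) -> list:
        """Gate every data retrieval on the agent's own entitlements."""
        if object_type not in agent.allowed_objects:
            # Deny and surface the attempt rather than silently widening access.
            raise PermissionError(
                f"{agent.name} is not entitled to read {object_type} records")
        return run_crm_query(object_type, query)

    retrieve_records(AGENT, "Case", "status = 'Open'")       # permitted
    # retrieve_records(AGENT, "Contact", "email != null")    # raises PermissionError

The shape matters more than the code: the entitlements live in the customer's access model, which is why the article argues that protections bolted onto the agent alone cannot fix the problem.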

Context and Relevance

This article is important for security teams, cloud architects and decision-makers evaluating or deploying agentic AI. It sits at the intersection of cloud shared-responsibility debates and emerging AI risk management: vendors will ship more agent features, but customers must assume responsibility for data governance, permissions and secure integration.

The piece reflects ongoing trends — more built-in platform agents, increased vendor attention to basic protections (MFA, scanning), and continued research showing novel attack surfaces (prompt injection, indirect data exfiltration). For anyone running or planning to run LLM-based agents in production, the article underlines why security-first deployment, clear ownership and architectural controls are non-negotiable.

Why should I read this?

Short version: if your organisation is even thinking about using AI agents, read this now — it saves you from two big mistakes: assuming the vendor will handle everything, or assuming the agent is harmless. The piece gives a clear, readable overview of the shared-responsibility traps, real exploit examples and practical areas to lock down (access control, secrets, MFA and architecture). It’s a quick heads-up that deploying agents without security planning is a risk you own, not one the vendor absorbs.

Source

Source: https://www.darkreading.com/cybersecurity-operations/ai-agent-security-awareness-responsibility