How a CIO guides agentic AI with structured governance

Summary

This interview with Joe Locandro, CIO of Rimini Street, explains how the organisation is deploying agentic AI — autonomous agents that act across systems — and the governance needed to keep it safe and productive. Locandro describes internal projects such as a “Deep Research” agent that aggregates customer data across Salesforce, finance and ServiceNow, and agents that analyse ticket resolution trends. Rimini Street has adopted Microsoft Copilot widely but paired adoption with mandatory training, policies and a steering committee to assess legal, privacy and operational risk.

The article covers practical governance steps: a submission form and working-group review for any new AI tool, role-based access controls for different developer types, model monitoring for drift, and rules about where agentic AI is inappropriate (for example, poor-quality data or individual employee performance assessments).
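The "model monitoring for drift" step above is the most mechanical of these controls. As a minimal sketch, assuming a register that stores a baseline sample of each model's output scores and compares it against live scores (the function names and the 0.2 threshold here are illustrative conventions, not details from the article), one common approach is the population stability index:

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Compare two samples of model scores; higher PSI means more drift.
    A common rule of thumb treats PSI > 0.2 as worth investigating."""
    lo, hi = min(baseline), max(baseline)
    # Bin edges derived from the baseline's range.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # bucket index 0..bins-1
            counts[idx] += 1
        # Smooth zero buckets so the logarithm is always defined.
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

def needs_review(baseline, current, threshold=0.2):
    """Flag a model for the working group when its outputs have shifted."""
    return population_stability_index(baseline, current) > threshold
```

A register like the one the article describes would run a check of this kind on a schedule and route flagged models back to the governance working group rather than silently retraining them.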

Key Points

  • Agentic AI differs from generative AI by performing actions or processes across systems rather than just producing outputs.
  • Rimini Street uses agents for 360-degree customer research, ticket analysis and productivity tools (eg Microsoft Copilot).
  • Governance sits at the centre: an AI steering committee plus a working group evaluate proposed tools for legal, HR and privacy risks.
  • All users must complete training before getting access; policies, cheat sheets and a model-drift register are enforced.
  • Access and development rights are tiered (IT developers, power users, general users) to limit risk as many agents are created.
  • CIOs must plan for production complexity (data normalisation, interfaces) and be proactive about governance and data quality.
  • Rimini Street has rejected tools when data quality, legal exposure or inappropriate use cases (eg individual performance assessment) posed risks.
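The tiered-access and training rules in the points above combine into a simple capability check. This is an illustrative sketch only: the role names, permission strings and `is_allowed` helper are assumptions for the example, not Rimini Street's actual policy.

```python
from enum import Enum

class Role(Enum):
    IT_DEVELOPER = "it_developer"
    POWER_USER = "power_user"
    GENERAL_USER = "general_user"

# Illustrative capability map: what each tier may do with agents.
PERMISSIONS = {
    Role.IT_DEVELOPER: {"run_agent", "build_agent", "deploy_agent"},
    Role.POWER_USER: {"run_agent", "build_agent"},
    Role.GENERAL_USER: {"run_agent"},
}

def is_allowed(role, action, completed_training=False):
    """Gate every action on both role tier and mandatory training,
    mirroring the 'training before access' rule described in the article."""
    if not completed_training:
        return False
    return action in PERMISSIONS[role]
```

The point of the tiering is that as the number of home-grown agents grows, the blast radius of any one mistake stays bounded by the author's tier.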

Why should I read this?

Short and blunt: if you’re running IT for an organisation, this is the playbook you’ll want handy. It shows how to let agentic AI speed up work without letting agents run wild — forms, training, steering committees, access tiers and drift checks. No fluff, just the sort of controls that stop your next shiny pilot from becoming a compliance headache.

Context and relevance

As organisations move from generative models to agents that act autonomously, the operational, legal and privacy stakes rise. This piece is timely because it shifts the conversation from theoretical risk to practical controls CIOs can implement now: governance committees, mandatory training, access controls and monitoring. For any organisation scaling AI beyond pilot projects, these are immediate priorities and align with broader trends in AI risk management and data governance.

Source

Source: https://www.techtarget.com/searchcio/feature/How-a-CIO-guides-agentic-AI-with-structured-governance