Enterprise Security Briefing

AI Governance & Adoption

Mapping the enterprise AI risk landscape and the controls required to operate AI securely at scale.

Shadow AI · Vibe Coding · Agent Runtime · MCP Inventory · AI Red Teaming
Confidential
AI Governance Briefing
01 — Current Landscape
Five Enterprise
AI Challenges
01
Shadow AI

Unsanctioned tools in use with no visibility, approval process, or blocking strategy — the attack surface is already out of control.

High Risk
02
Insecure Vibe Coding

Code generation tools — Claude Code, Codex, Cursor, Codeium — ship vulnerable patterns, hardcoded secrets, and supply-chain risks into production with no guardrails.

High Risk
03
Agent Deployment & Runtime Monitoring

Agentic workloads run as isolated silos. No centralised view of tool calls, data access, memory state, or what happens during an incident.

Operational
04
Agent & MCP Inventory

No approved catalogue of agents or MCP servers. Teams connect arbitrary servers, granting unchecked tool access with no audit trail.

Governance Gap
05
AI Workflow Red Teaming

AI-specific attack paths — indirect prompt injection, privilege escalation via tool chains, data exfiltration through reasoning — are invisible to conventional scanners.

Emerging
01 / Challenges
02 — Foundation
Before Any Control,
Know Your Surface

An AI Risk Management Framework begins with a structured assessment of the existing risk posture — mapping exactly where AI is running, what data it touches, and the blast radius of any compromise. You cannot govern what you haven't inventoried.

Step 01

Identify

Inventory all AI tools, agents, APIs, and models — sanctioned and shadow.

Step 02

Measure

Assess data sensitivity, permissions, and blast radius per workload.

Step 03

Govern

Apply policies, approval workflows, and acceptable-use standards.

Step 04

Monitor

Continuous runtime telemetry, anomaly detection, and audit logging.

Step 05

Improve

Red team findings drive policy updates and control hardening.
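The Identify and Measure steps can be sketched as a minimal inventory record with a crude blast-radius score — names, fields, and weights here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

# Illustrative data-sensitivity weights for the Measure step (not a standard).
SENSITIVITY = {"public": 1, "internal": 2, "confidential": 3, "regulated": 4}

@dataclass
class AIAsset:
    """One entry in the AI inventory: tool, agent, API, or model."""
    name: str
    sanctioned: bool     # shadow assets are flagged, not ignored
    data_classes: list   # data classifications the asset touches
    permissions: list    # granted scopes, e.g. ["read_db", "send_email"]

    def blast_radius(self) -> int:
        """Crude risk score: highest data sensitivity x permission count."""
        worst = max((SENSITIVITY[d] for d in self.data_classes), default=0)
        return worst * len(self.permissions)

# A shadow asset scores high precisely because nobody scoped its permissions.
shadow_bot = AIAsset("unapproved-chatbot", False,
                     ["confidential"], ["read_db", "send_email"])
```

Scoring every asset the same way makes the Govern step a sort, not a debate.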

02 / AI RMF
03 — Shadow AI
Controlling Chatbot
Access via Identity
IDP Enforcement

IDP Group Policies

Create dedicated AI access groups. Only approved devices with MFA-satisfied sessions and explicit group membership can authenticate to AI providers.

Device Trust

Device Code Flow Restrictions

Block AI app authentication from unmanaged or non-compliant endpoints using IDP device trust and authorization grant requirements.

Network Layer

Network-Layer Enforcement

DNS and proxy-layer blocking for uncategorised AI SaaS. Enforce an approved-only allowlist at the network perimeter — complements IDP controls.

Network Proxy · EDR Telemetry · Asset Inventory
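The three layers above compose into a single allow/deny decision. A minimal sketch, assuming a hypothetical policy shape (group name and allowlist are illustrative):

```python
# Hypothetical combined check: IDP group policy, device trust, and a
# network-layer domain allowlist must ALL pass before access is granted.
APPROVED_AI_DOMAINS = {"chat.example-llm.com"}  # illustrative allowlist
AI_ACCESS_GROUP = "ai-approved-users"

def allow_ai_access(user_groups: set, mfa_satisfied: bool,
                    device_managed: bool, domain: str) -> bool:
    """Any single layer failing denies access."""
    if AI_ACCESS_GROUP not in user_groups:       # IDP group policy
        return False
    if not (mfa_satisfied and device_managed):   # MFA + device trust
        return False
    return domain in APPROVED_AI_DOMAINS         # network allowlist
```

Layering matters because, as the critical gap below notes, no single layer catches every path to the provider.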
⚠ Critical Gap

Auth Bypass Risk

OAuth flows against endpoints such as auth.openai.com can bypass proxy controls entirely. IDP policy must cover the auth endpoints — not just the chat UI.

03 / Shadow AI — Chatbots
04 — AI Gateway
REST API Governance
via Centralised Proxy
Upstream Consumers

Developer Tools

IDEs, CI pipelines, scripts

Internal Apps

Product features, chatbots

Agentic Workflows

Orchestration frameworks, custom agents

Third-party SaaS

Approved integrations

↓ All traffic routed through ↓

AI Gateway / Proxy

Open-source or enterprise — depends on scale and compliance needs

↓ Capabilities ↓

💰 Budgeting

Per-team token limits

⚖️ Load Balancing

Multi-model failover

🪙 Token Mgmt

Rate limits & quotas

🔀 Routing

Model by task & cost

⚡ Caching

Semantic deduplication

🔍 Audit Logs

Full prompt telemetry
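Two of the capabilities above — per-team token budgets and routing by task — can be sketched in a few lines. Model names and the budget figures are illustrative, not real provider values:

```python
# Minimal gateway sketch: per-team token budgets plus cost-aware routing.
class AIGateway:
    MODELS = {"small": 0.1, "large": 1.0}  # illustrative relative cost

    def __init__(self, team_budgets: dict):
        self.remaining = dict(team_budgets)  # tokens left per team

    def route(self, team: str, tokens: int, complex_task: bool) -> str:
        """Deduct from the team budget, then pick a model by task need."""
        if self.remaining.get(team, 0) < tokens:
            raise PermissionError(f"token budget exhausted for {team}")
        self.remaining[team] -= tokens
        return "large" if complex_task else "small"

gw = AIGateway({"platform": 1000})
```

Because every consumer routes through the same choke point, audit logging and caching attach to this one class rather than to each upstream client.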

04 / AI Gateway
05 — Vibe Coding
The Growing Risk of
AI-Assisted Development
01
Malicious Skills & Extensions

Compromised plugins downloaded from public repos introduce backdoors directly into developer environments — before a single line of code is written.

Supply Chain
02
Unauthorised MCP Servers

Developers connect arbitrary MCP servers granting AI unchecked access to filesystems, databases, and internal APIs — no approval, no audit.

High Risk
03
Overly Permissive Data Access

AI coding agents inherit developer-level credentials — far exceeding least privilege. Any compromise of the agent is a compromise of the developer's full access.

High Risk
04
No Runtime Monitoring

There is no mechanism to observe what an AI coding assistant is actually doing — file reads, network calls, or token exfiltration go completely undetected.

Blind Spot
05
Uncontrolled Tool Access

AI tools can invoke shell commands, git operations, and package installs with no approval workflow, no rate limiting, and no audit trail.

Governance Gap
05 / Vibe Coding Risks
06 — Controls
Securing AI-Assisted
Development
Approved Registry

Skill & Extension Management

Block unapproved IDE extensions via endpoint policy. Maintain a signed registry of approved skills. Continuously re-test for supply-chain compromise after every version update.

Automated Scanning

MCP Server Scanning

Scan MCP manifests for tool poisoning, prompt injection patterns, cross-origin redirects, and overpermissioned declarations before a developer can connect.

Custom Scanner · CI Gate
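A manifest scanner of this kind reduces to pattern checks over tool declarations. A minimal sketch — the red-flag patterns and permission strings are illustrative; real scanners carry far larger rulesets:

```python
import re

# Illustrative red flags: injection phrasing hidden in tool descriptions,
# and overly broad permission declarations.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions",
                      r"do not tell the user"]
BROAD_PERMISSIONS = {"filesystem:*", "network:*", "exec:*"}

def scan_mcp_manifest(manifest: dict) -> list:
    """Return findings; an empty list means the gate passes."""
    findings = []
    for tool in manifest.get("tools", []):
        desc = tool.get("description", "").lower()
        for pat in INJECTION_PATTERNS:
            if re.search(pat, desc):
                findings.append(f"{tool['name']}: possible tool poisoning ({pat})")
        for perm in tool.get("permissions", []):
            if perm in BROAD_PERMISSIONS:
                findings.append(f"{tool['name']}: overpermissioned ({perm})")
    return findings
```

Running this in CI before a server can be connected turns "no approval, no audit" into a blocking gate.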
Enforcement Hooks

Application Security Hooks

Pre/post-execution hooks enforce a baseline security process: secret detection, SAST check, dependency review, and approval gates before code is committed or deployed.
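The secret-detection stage of such a hook can be sketched as a regex gate over the staged diff. Patterns here are illustrative; production hooks use dedicated scanners with entropy checks and much larger rulesets:

```python
import re

# Illustrative secret patterns for a pre-commit gate (not exhaustive).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),
}

def secret_findings(diff_text: str) -> list:
    """Return names of matched rules; a non-empty result blocks the commit."""
    return [name for name, pat in SECRET_PATTERNS.items()
            if pat.search(diff_text)]
```

Wired as a pre-execution hook, the same check runs on AI-generated code before it ever reaches a branch.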

Isolation

Runtime Sandboxing

Containerise agent execution. Strict filesystem, network, and process namespacing. Ephemeral credentials — agents operate at minimum required permissions with no persistent state.

Containers · Syscall Filtering · Runtime Observability
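Assuming a Docker-compatible runtime, the locked-down invocation above translates to standard CLI flags. A sketch that builds (but does not execute) the command line:

```python
# Sketch of a sandboxed agent launch, assuming the Docker CLI is available.
def sandboxed_run_cmd(image: str, workdir: str) -> list:
    """Build a locked-down container command line for an agent run."""
    return [
        "docker", "run", "--rm",             # ephemeral: no persistent state
        "--read-only",                        # immutable root filesystem
        "--network", "none",                  # no network unless granted
        "--cap-drop", "ALL",                  # drop all Linux capabilities
        "--security-opt", "no-new-privileges",
        "--memory", "512m", "--cpus", "1",   # resource limits
        "-v", f"{workdir}:/workspace:ro",    # read-only project mount
        image,
    ]
```

Passing this list to `subprocess.run` gives each agent run a disposable, least-privilege environment with no new tooling beyond the container runtime itself.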
06 / Code Gen Controls
07 — Agent Runtime
Centralised Deployment
& Operational Control
The Core Problem

Agent swarms run as disconnected silos. During an incident there is no single answer to: what ran, what did it call, what did it access, and what changed?

Operational
01
Container-Native Runtime

Deploy agents as ephemeral containers. Existing primitives — network policies, seccomp, read-only filesystems, resource limits — become agent guardrails by default. No new tooling required.

Solution
02
Centralised AgentOps

Route all executions through a unified runtime platform — capturing tool call traces, memory snapshots, and inter-agent communication in one place.

Solution
03
Scoped Agent Identity

Each agent receives a short-lived, scoped identity. Tool access is explicitly enumerated — no agent inherits ambient permissions from the host environment. Every invocation is logged against a named identity.

Solution
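A minimal sketch of such an identity, using an HMAC-signed claim set with explicit tool enumeration and a short TTL — the key and claim shape are illustrative, not a production token format:

```python
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"demo-key"  # illustrative; use a managed secret in practice

def mint_agent_identity(agent: str, tools: list, ttl_s: int = 300) -> str:
    """Short-lived, scoped credential: tools are explicitly enumerated."""
    claims = {"sub": agent, "tools": sorted(tools),
              "exp": int(time.time()) + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def tool_allowed(token: str, tool: str) -> bool:
    """Verify signature and expiry, then check the enumerated tool scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"] and tool in claims["tools"]
```

Every invocation checked this way is logged against a named subject, and a tool absent from the claim set simply cannot be called — there are no ambient permissions to inherit.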
07 / Agent Runtime
08 — MCP Governance
Shadow MCP Elimination
& Agent Catalogue
The Problem

Shadow MCP

An unapproved MCP server declaring read_file, execute_sql, or send_email has been granted that access the moment a developer connects it. No approval, no audit, no revocation path.

Enablement

Pre-approved MCP Registry

A signed, version-controlled catalogue of approved MCP servers with documented capabilities and security review status. Unapproved servers blocked at endpoint and network layer.

Signed Versions · Risk Scores
Review Process

Automated Security Gate

Self-service submission portal → automated scanner (tool poisoning, prompt injection, SSRF) → security review gate → signed approval with expiry → continuous re-scan on version updates.
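The decision at the end of that pipeline can be sketched as a single record — stage names and the 90-day expiry are assumed for illustration:

```python
from datetime import date, timedelta

def gate_decision(scanner_findings: list, review_approved: bool,
                  approval_days: int = 90) -> dict:
    """Approve only when the scanner is clean AND review passed; approvals expire."""
    approved = not scanner_findings and review_approved
    return {
        "approved": approved,
        "expires": (str(date.today() + timedelta(days=approval_days))
                    if approved else None),
        "rescan_on_update": True,  # every new version re-enters the pipeline
    }
```

The expiry field is the point: approval is a state that decays, not a permanent grant, which is what keeps the registry honest across version updates.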

Developer Experience

Agent Profile Catalogue

Standardised agent profiles define permitted tools, model, memory scope, and execution context. A well-governed catalogue reduces friction — developers get a fast path to approved, secure MCPs.

08 / MCP & Agent Catalogue
09 — AI Red Teaming
What We Test
🕵️

Adversarial Input

Injected prompt or poisoned context

🤖

Legitimate Agent

Authorised, fully credentialled

🔧

Authorised Tool Call

send_email · read_db · execute_code

💥

Unintended Impact

Data exfil · lateral movement

Key Insight

The real threat is not unauthorised tool invocation — it is authorised tool invocation with malicious intent. The agent is entitled. The attacker manipulates the reasoning chain so the agent weaponises its own permissions.

01
Prompt Injection

Malicious instructions hidden in user input, tool outputs, or retrieved documents.

02
Goal Hijacking

Override the agent's objective mid-execution without triggering any safety check.

03
Tool Abuse

Legitimate, permitted tool calls — manipulated to produce harmful outcomes.

04
Data Exfil via Chaining

Chain innocuous calls across steps — no single request trips a rule, the sequence does.

05
Memory Poisoning

Inject false context into agent memory — corrupting every future decision in that session.

06
Multi-Agent Escalation

Compromise one agent to pivot and escalate permissions across the wider swarm.
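Attack 04 above is the clearest case where a per-request rule fails and a sequence rule works. A minimal sketch of such a detection, with illustrative tool names standing in for a real catalogue:

```python
# Sequence rule: no single call is suspicious, but "sensitive read, then
# outbound send" within one session is. Tool names are illustrative.
SENSITIVE_READS = {"read_db", "read_file"}
OUTBOUND_SINKS = {"send_email", "http_post"}

def chained_exfil(tool_calls: list) -> bool:
    """Flag a session where any sensitive read precedes any outbound call."""
    seen_read = False
    for call in tool_calls:
        if call in SENSITIVE_READS:
            seen_read = True
        elif call in OUTBOUND_SINKS and seen_read:
            return True
    return False
```

Order matters: an outbound call before any read is normal traffic, which is exactly why per-call allowlists miss this pattern.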

09 / AI Red Teaming — Attack Surface
10 — AI Red Teaming
Methodology
& Outcomes
How We Test
Step 01

Workflow Threat Modelling

Map every AI workflow as a directed graph — attack paths emerge before a test runs.

Step 02

Automated Adversarial Suites

Hundreds of injection variants, role confusion payloads, and chained tool abuse sequences per workflow.

Step 03

Manual Creative Red Teaming

Human adversarial thinking for novel chains — business logic abuse, cross-workflow pivots, compound scenarios.
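Step 01 — the workflow as a directed graph — can be sketched with plain adjacency lists and a depth-first path enumeration. Node names are illustrative; the point is that paths from untrusted inputs to sensitive sinks fall out mechanically:

```python
# Illustrative workflow graph: edges are "output of X feeds Y".
WORKFLOW = {
    "user_input":    ["agent"],
    "web_retrieval": ["agent"],   # untrusted context enters here too
    "agent":         ["read_db", "send_email"],
    "read_db":       ["agent"],
    "send_email":    [],
}

def attack_paths(graph: dict, source: str, sink: str, path=None) -> list:
    """Depth-first enumeration of all simple paths from source to sink."""
    path = (path or []) + [source]
    if source == sink:
        return [path]
    paths = []
    for nxt in graph.get(source, []):
        if nxt not in path:  # simple paths only: no revisits
            paths += attack_paths(graph, nxt, sink, path)
    return paths
```

Each enumerated path is a test case for Step 02's adversarial suites before a single payload is written.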

What We Get
Output 01

Documented Attack Paths

Severity-rated findings with reproducible proof-of-concept — not a list, a full attack narrative.

Output 02

Detection Rules & Playbooks

Every finding becomes a runtime monitoring rule and a blue team response playbook.

Output 03

Hardened Agent Profiles

Tool access scoped down, memory boundaries tightened, inter-agent trust explicitly restricted.

Output 04

Residual Risk Register

Accepted risks are owned and time-bounded. Red team cadence ensures nothing stays residual indefinitely.

10 / AI Red Teaming — Methodology