Highlight Report

Pair agentic AI with human context at the core to fix your insurance operations

Insurers can’t afford to sit out the agentic shift. Inaction locks them into human-centric operating models that are both inefficient and expensive, especially as carriers contend with sprawling systems, rising cost per policy, tightening capital rules, and relentless compliance demands. Add a shrinking talent pool and mounting regulatory pressure, and legacy models are starting to crack.

Agentic AI gives insurers a fresh opportunity to redesign insurance operations by placing digital labor at the core. However, this only works when agents are built with real operational context. Goals, accountability, decision rights, and domain judgment shape how work gets done and how outcomes are owned within a carrier. Sutherland’s AI Hub aims to address this by blending role-based operating logic with AI agents to redesign and modernize insurance operations.

Agents without human context are just rebadged automation

Most insurers treat agents as a technology experiment rather than part of a new operating model. These agents are built to handle repetitive tasks such as navigating systems, responding to queries, and interacting with customers, but completing tasks alone doesn't solve real business problems. If an agent can complete a task but not own the outcome, it isn't agentic; it's traditional automation in a new bottle. The result is a proof-of-concept graveyard that fails to move the needle on loss ratios, policy costs, and customer experience.

The missing ingredient is the way human employees work: with a defined purpose, goals, and boundaries that help them plan, make informed decisions, and execute tasks effectively. Unless insurers inject that context into agents, humans will remain the organization's real control layer. Embedding that context is a significant challenge, and it is where Sutherland's AI Hub comes in.

Treat your agents as digital counterparts of the workforce embedded with human context

Sutherland recognized that most agents stumble because they're not trained on the realities of human-based roles. Its AI Hub, launched in 2025 as the culmination of more than a decade of product and technology innovation, aims to change that by harnessing its deep domain expertise to codify best practices into role-based agents that mirror how work truly happens within a carrier. Each persona is mapped to a real operational job (see Exhibit 1).

Exhibit 1: Insurers must start managing digital talent like human talent

Source: Sutherland, 2026

These roles aren't isolated bots chasing ROI metrics. They operate as a stitched-together virtual team with real role-level context, carrying a goal from intake to outcome. It's not AI imitating people; it's AI working the way people do, with people, in a collaborative, contextual, and purposeful way. For insurers, this translates into faster cycle times and reduced cost per policy, among other benefits.

A governed tech stack is the difference between production and agentic slop

Role-based operating context alone isn't enough. Without a governed tech stack, insurers end up with poorly connected agents that have unclear authority, limited observability, and a lack of trust, what we call "agentic slop." Ultimately, it's the technology underneath that makes agents work. To avoid agentic slop, Sutherland has integrated three layers (explained below) into the AI Hub, bringing accountability, clarity, and intelligence to every agent.

  1. Application layer (role-based personas): Mirrors how insurance operations truly work through domain-specific digital roles such as the Extractor, the Inspector, and the Conductor. Augmented by process intelligence tools such as SKAN AI, it decodes tribal knowledge within carriers and converts it into structured, automatable workflows.
  2. Model layer (intelligence engine): Built on large language models combined with automation and AI platforms spanning the front, middle, and back offices, including an arsenal of Sutherland's IP across complementary technologies (robotic process automation, optical character recognition, intelligent document processing, customer experience, and low-code/no-code). This layer drives reasoning, prediction, and execution, serving as the intelligence engine for agent performance.
  3. Orchestration layer (control and trust): Synchronizes multi-agent workflows and connects to domain partners such as Federato, Five Sigma, 913.ai, and Solvrays. Supported by a trust, test, and knowledge center, this layer ensures governance, observability, and guardrails that make agentic AI production-grade.
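The separation of concerns across the three layers can be illustrated with a minimal sketch. All class and method names here are hypothetical illustrations, not Sutherland's actual APIs: a persona (application layer) owns a goal and a bounded set of actions, a model layer stands in for the reasoning engine, and an orchestrator enforces guardrails and keeps an audit trail.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """Application layer: a role-based digital worker mapped to a real job."""
    name: str
    goal: str
    allowed_actions: set

class ModelLayer:
    """Model layer: reasoning engine (an LLM plus OCR/IDP/RPA in practice)."""
    def decide(self, persona: Persona, case: dict) -> str:
        # Toy rule standing in for LLM reasoning: enrich incomplete cases first.
        return "extract_data" if not case.get("complete") else "approve"

@dataclass
class Orchestrator:
    """Orchestration layer: synchronizes agents and enforces guardrails."""
    model: ModelLayer
    audit_log: list = field(default_factory=list)

    def run(self, persona: Persona, case: dict) -> str:
        action = self.model.decide(persona, case)
        if action not in persona.allowed_actions:  # guardrail: decision rights
            action = "escalate_to_human"           # outside authority -> human
        self.audit_log.append((persona.name, case["id"], action))  # observability
        return action

extractor = Persona("Extractor", "assemble a complete case file", {"extract_data"})
orch = Orchestrator(ModelLayer())
print(orch.run(extractor, {"id": "CLM-1", "complete": False}))  # extract_data
print(orch.run(extractor, {"id": "CLM-2", "complete": True}))   # escalate_to_human
```

The design point is that trust lives in the orchestration layer: the model can propose any action, but only actions within the persona's decision rights execute, and every decision is logged.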

The caveat is that most insurers are still working through their foundational challenges. Core system fragmentation, inconsistent data pipelines, and unclear process ownership remain widespread. Agentic AI doesn’t bypass these weaknesses but exposes them. When underlying systems, data flows, operating controls, and context are poorly defined, multi-platform, multi-partner agent environments quickly become heavy and brittle. Fixing the foundation isn’t a box already checked, but a parallel journey of deliberate context engineering. This is where the hardest work and the highest value creation still sit.

Design role-based context into the operating model first, then scale agentic AI

Sutherland’s role-based agents are already in action. A US-based insurer, for instance, is using them to manage daily benefits enrollment cases that require data sufficiency and data accuracy checks. Another carrier operating a scaled multi-agent setup has five agents working alongside a supervisor agent that sets priorities and orchestrates the order of task execution within defined guardrails.

Such implementations illustrate how you can build a role-based agent workforce:

  1. Start with roles, not tasks: Design agents around real operational roles (e.g., claims examiner, benefits analyst, underwriter), not isolated activities. Context comes from owning an outcome, not completing a step.
  2. Codify tribal knowledge before automating: Capture how experienced employees actually make decisions (exceptions, judgment calls, handoffs, and escalation logic) before translating work into agent logic.
  3. Define authority, boundaries, and accountability: Every agent must have clear decision rights, guardrails, and escalation paths, mirroring how humans operate within governance, compliance, and risk thresholds.
  4. Ground agents in live operational signals: Agents must operate with real-time access to data sufficiency, accuracy checks, policy rules, and downstream impacts. Without this, they remain blind executors.
  5. Treat agents as workforce members, not IT assets: Measure them the way you measure people in terms of cycle time, quality, exception rates, and customer outcomes, not just throughput or automation ROI.
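The five principles above can be condensed into a single role definition. This is an illustrative sketch, the field names are assumptions rather than the AI Hub's actual schema: a role owns an outcome (1), carries codified decision rules (2), has an explicit authority limit (3), acts only when live operational signals are present (4), and is measured on workforce-style KPIs (5).

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    role: str              # 1. a real operational role, not an isolated task
    outcome: str           # the outcome the agent owns end to end
    decision_rules: dict   # 2. codified tribal knowledge (judgment calls)
    decision_limit: float  # 3. authority boundary, e.g., max approval amount
    required_signals: list # 4. live operational inputs the agent must see
    kpis: tuple = ("cycle_time", "quality", "exception_rate")  # 5. workforce metrics

    def can_decide(self, case: dict) -> bool:
        """Act only within authority and with all live signals present."""
        within_authority = case.get("amount", 0) <= self.decision_limit
        signals_present = all(s in case for s in self.required_signals)
        return within_authority and signals_present

examiner = AgentRole(
    role="claims examiner",
    outcome="adjudicate a benefits enrollment case end to end",
    decision_rules={"missing_docs": "request_from_member"},
    decision_limit=5_000.0,
    required_signals=["data_sufficiency", "data_accuracy"],
)
print(examiner.can_decide(
    {"amount": 1_200.0, "data_sufficiency": True, "data_accuracy": True}))  # True
print(examiner.can_decide({"amount": 9_000.0}))  # False: escalate to a human
```

Anything outside the role's authority or missing a required signal falls through to a human, which mirrors how a carrier's governance and risk thresholds constrain a human examiner.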

The Bottom Line: Insurers must treat agents like real members of their workforce to shift their operating model.

Redesigning the operating model with digital labor at the core is imperative to staying competitive. Insurers that treat agents like real members of their workforce have a real chance to rebuild their operating models for the AI era; those that don't will just add more layers of human and technology complexity. Ultimately, success hinges on choosing a partner with deep domain expertise in providing real human context to agents.
