Harnessing Agentic AI
Data Symposium 2026 · 12 May 2026 · Matteo Sorci · 60-minute workshop
Follow-along map

Today is about harnessing.

Agentic AI is fundamentally about harnessing. The most useful agents are not raw models. They are models harnessed into a working environment with scope, instructions, tools, permissions, traceability, evaluation, and audit.

Russian-doll harnessing. Anthropic harnessed a raw language model into Claude. Today we harness Claude Code into a knowledge-base agent, following Andrej Karpathy's example. Tomorrow you can harness this pattern into your own work. One pattern, three rings, the same discipline.

The workshop, in four moves

What you will see, beat by beat.

Phase 01 · Setting the stage

Why "harness" is the word of 2026.

  • Russian-doll harnessing: Anthropic, then us, then you.
  • Three kinds of AI: traditional, generative, agentic.
  • The agentic loop: goal, plan, act, observe, adapt.
  • The bottleneck moved: from prompt engineering to context engineering to harness engineering.
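The agentic loop above can be sketched in a few lines. This is a toy illustration, not any real agent framework; every name in it (`run_agent`, `plan`, `tools`) is a placeholder chosen for this sketch.

```python
# Toy sketch of the agentic loop: goal, plan, act, observe, adapt.
# Everything here is illustrative, not a real agent framework.

def run_agent(goal, tools, plan, max_steps=10):
    """Loop until the planner says we are done or the step budget runs out."""
    history = []
    for _ in range(max_steps):
        step = plan(goal, history)                  # plan: pick the next action
        if step["action"] == "done":
            return step["result"]                   # goal reached
        observation = tools[step["action"]](**step["args"])   # act
        history.append((step, observation))         # observe; next plan() adapts
    return None                                     # budget exhausted

# Hypothetical planner: search the notes once, then summarize what was found.
def toy_plan(goal, history):
    if not history:
        return {"action": "search", "args": {"query": goal}}
    _, found = history[-1]
    return {"action": "done", "result": f"Found {len(found)} notes on '{goal}'"}

notes = {"atlas": ["kickoff", "risk log"], "budget": ["q3 forecast"]}
tools = {"search": lambda query: notes.get(query, [])}
print(run_agent("atlas", tools, toy_plan))  # Found 2 notes on 'atlas'
```

The point of the shape, not the details: the loop is a cycle of decisions that reacts to what it observes, which is what separates an agent from a script.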
Phase 02 · The harness, defined

Constraint and enabler at once.

  • Without the harness: capability with no direction.
  • With the harness: the steering, the wheels, the discipline.
  • The seven layers that turn raw capability into controlled capability.
  • A live vote: which actions would you let an agent execute automatically?
Phase 03 · Live demo, on stage

Alex's second brain learns one new thing.

  • Meet Alex, a Program Manager mid-rollout of Project Atlas.
  • The wiki is already 14 entries deep. Three folders: raw, pending, wiki.
  • One CLAUDE.md file holding the standing instructions.
  • The agent ingests a journal entry that contradicts an earlier one. Both views preserved.
  • You ask the wiki a real question. It cites sources, or names the gap.
  • The agent proposes a changelog to close the gap, without inventing.
Phase 04 · Honest counterweights, your turn, takeaways

What this approach does not solve, and what stays with you.

  • What it does well: bounded scope, source-grounded output, full traceability, human in the loop.
  • What it does not solve: scaling to teams, wiki drift, judgment calls, or writing sound rules in the first place.
  • Three minutes of silent reflection on the boldest constraint you would put on an agent in your context.
  • Three voices from the room. A final vote on the minimum harness you would require.
The harness, in seven layers

The vocabulary we will use.

Any constraint you can name lives in one of these layers.
01 · Scope: where it operates
    this folder only, never the network drive
02 · Instructions: how it behaves
    always cite the source for any factual claim
03 · Tools: what it can use
    read, write, search; no email, no calendar
04 · Memory: what it carries
    across turns in this session, not across sessions
05 · Permissions: what needs approval
    any deletion, any outgoing message
06 · Evaluation: how it judges quality
    self-check the output against the brief before returning
07 · Audit: how you verify and inspect
    every action logged with its reason, replayable later
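The seven layers can be made concrete as one explicit configuration object. This is a sketch of the idea, not any real product's API; the class and field names are assumptions chosen to mirror the layers, and the example values are the ones from the list above.

```python
# Sketch of the seven harness layers as one explicit config object.
# Class and field names are illustrative; values mirror the examples above.
from dataclasses import dataclass, field

@dataclass
class Harness:
    scope: list          # 01 where it operates
    instructions: str    # 02 how it behaves
    tools: set           # 03 what it can use
    memory: str          # 04 what it carries
    needs_approval: set  # 05 what needs approval
    evaluation: str      # 06 how it judges quality
    audit_log: list = field(default_factory=list)  # 07 verify and inspect

    def allow(self, action: str, reason: str) -> bool:
        """Gate every action, and log it with its reason either way."""
        ok = action in self.tools and action not in self.needs_approval
        self.audit_log.append((action, reason, "auto" if ok else "needs approval"))
        return ok

agent = Harness(
    scope=["./wiki"],
    instructions="always cite the source for any factual claim",
    tools={"read", "write", "search"},
    memory="this session only",
    needs_approval={"delete", "send_email"},
    evaluation="self-check output against the brief before returning",
)
print(agent.allow("write", "update entry 14"))    # True: an allowed tool
print(agent.allow("delete", "remove stale page")) # False: not a tool, and needs approval
```

Notice that the audit layer is not optional here: `allow` logs every decision, approved or not, which is what makes the agent's behavior replayable later.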
Three takeaways

If you remember nothing else.

01

Agents act toward goals.

Not a script. A loop that plans, acts, observes, adapts. Different in nature, not just better tooling.

02

Tools create power and risk together.

The same connector that makes the agent useful makes it consequential. You cannot have one without thinking about the other.

03

Harnessing is the design.

Scope, instructions, permissions, traceability, evaluation, audit. The agent is only as good as its harness.

Final thought

AI agents will not replace human judgment. They will make human judgment more important.

The future will not belong to those who simply deploy agents. It will belong to those who learn how to harness them.

The harness is not a fixed boundary. It is a pattern you can apply at every scale, including the scale of building the next harness. The seven layers above carry from a personal sandbox to enterprise systems with the same vocabulary, just larger consequences.

Matteo Sorci, PhD
Account Executive · Dell Technologies
linkedin.com/in/matteo-sorci-83b752