SGA Internal · VP of AI Strategy · Phase 1 Rollout

The AI Operating System for SGA
Phase 1 — the plumbing, the pad, the flywheel.

A foundation that turns data into actions, actions into outcomes, and outcomes into a self-improving loop — across every Office Manager, every Regional Director, and every executive report in the network.
7 Layers · 3 Audiences · NIST RMF Aligned · IPO / S-1 Ready
For: Ron · Karen · Myles
Prepared by Scott Guest · April 27, 2026
Phase 1 of the multi-phase VP of AI rollout
02 · The Thesis
Why now · The 3-year window

We are in the era of the specifier — and dental hasn't woken up yet.

Vibe-coded software went from 25% complete a year ago to 95% complete today. The remaining 5% is the hard part — security, scalability, compliance, governance — and it's roughly a three-year window before AI eats the traditional software stack. The DSO that builds a safe, HIPAA / PCI / SOX-isolated AI environment first owns the moat.
Window: ~3 yrs · before the last 5% is solved and the moat closes
Dental Tech Maturity: Behind · years behind medical · medical 2 yrs behind every other industry
SGA Position: Most tech-forward DSO · vendor consensus · Denticon-anchored API leverage
What Wins: The plumbing · compliance, isolation, governance — not another report
The bet
Stop chasing point solutions. Build the environment — secure, governed, instrumented — and the agents become a factory we can keep extending forever.
Why this wins for SGA
SGA is uniquely positioned: most tech-forward DSO, Denticon API-anchored, executive support (Ron / Karen / Myles), and a real proof point already shipping (Daybreak).
SGA · VP of AI Strategy
02 / 14
03 · Three Audiences
One engine · Three operating altitudes

Every layer of the org gets a different view of the same flywheel.

The same underlying engine produces a daily action list at the practice, a coaching scorecard at the region, and a rolled-up momentum view at the top.
Layer 1 · Operating
Office Managers
~260 OMs across the network
What they get
  • Daybreak briefing — what to do today, ranked by impact
  • Patient task lists — confirmations, reactivation, follow-up
  • Same-day feedback — did it move the needle?
  • Agents handle the busywork — texting, scheduling, RCM nudges
Daybreak live · Feedback loop in build
Layer 2 · Coaching
Regional Ops Directors
Multi-practice oversight · spans 8–15 offices
What they get
  • Regional scorecard — practice-by-practice momentum
  • Compliance view — who executed, who didn't
  • Coaching prompts — what to talk about with which OM
  • Exception alerts — only the practices that need them this week
ROD dashboard in build · RIS scorecard adjacent
Layer 3 · Steering
Executive Team
Karen · Myles · the board · Ron's IT org
What they get
  • S-Curve roll-up — where every practice sits on the curve
  • Network momentum — what's accelerating, what's stuck
  • Governance dashboard — NIST RMF posture, audit-ready
  • Strategic agent ROI — hours reclaimed, decisions accelerated
S-Curve v1 live · Network roll-up in build
04 · The Architecture
★ The hero · 7 layers, 3 output rails, 1 roll-up

The full stack — built once, extended forever.

Foundation up: concept → data → security → orchestration → feedback → self-improvement → automation. Each layer is a deliverable. Each layer unlocks the next.
Status legend: Live · Build · Next
↑ Inputs
  • Power BI (Live) · source of truth · 91% validated
  • Denticon API (Live) · modern API · all major workflows
  • Dental Intel (Live) · 85 metrics · interim bridge
  • NexHealth (Live) · booking · patient comms
  • Sikka ONE (Build) · cross-PMS aggregation
  • PMS direct (Build) · Eaglesoft · Dentrix · Open Dental
SGA AI OS · The Stack
↑ S-Curve Roll-Up
Network-wide momentum view
7 · Remedial Automation
Agents pick up repeatable tasks — confirmations, reactivation, RCM follow-ups, outbound campaigns.
6 · Self-Improvement Loop
The algorithm amplifies what works, suppresses what doesn't, flags non-compliance.
5 · Feedback Loop (In Build)
Did they execute? Did it move the metric? The instrumentation that makes everything else work.
4 · SGA AI OS (In Build)
Workflow orchestration · agent runtime · scheduling · routing · audit logging.
3 · Security & Infrastructure (In Build)
HIPAA · PCI · SOX-isolated environment · hardware infra · BAAs (Anthropic et al.) · data egress controls.
2 · Data Validation (In Build)
Source of truth · validated metric library · the "two-clock" gate for every AI report.
1 · Concept & Vision (In Build)
Common strategy · workflows · governance philosophy · NIST RMF mapping.
FOUNDATION · DSO operating context · 260 practices · Multi-PMS reality
Outputs ↓
Executive Layer
S-Curve roll-up
Network-wide momentum · governance posture · agent ROI
ROD Layer
Regional scorecard
Practice-by-practice compliance · coaching prompts · exception alerts
OM Layer
Daybreak briefing
Daily ranked actions · patient task lists · same-day feedback
The bolt-on premise
Once Layers 1–5 exist, every new agent is a plug-in, not a project.
05 · Layer 2 · Data Validation
Layer 2 · The Two-Clock Problem

If we don't know which 75% is right, we don't know anything.

Today users are making decisions on unvalidated Power BI data. Myles uses numbers "directionally." Karen gets calls she can't reconcile. Until we close this gap, every downstream layer compounds the error.
The risk we're carrying today
~75% of metrics validated, ~75% of those accurate — but no one knows which 75% is the right 75%. Decisions are being made on the unknown delta.
The two-clock principle
Two clocks in a house only work if both connect to the same satellite. Power BI becomes the satellite. Every AI-generated report validates against it before it ships.
Validated metric library
A growing catalog of queries that have been built, run, validated manually, and certified. Each cert turns on one more "always-on" metric AI can use.
Bridge while we close gaps
Dental Intel as interim bridge for treatment-acceptance and case-acceptance metrics until the Power BI integration covers them — retiring the $11K/month spend after May.
June 8 readiness checklist
What "validated" must mean before we expand to two more clinics.
Production · Collections · Adjustments — signed off by ops
? Scheduled appointments — Karen / Myles must validate before Myles uses "directionally"
? Treatment plan data — migration in flight · target end of May
AI-report validation layer — not built · this is the new layer Phase 1 ships
Comparison to Gen4 BI baseline — possible per location, in progress
Hard rule
No AI report ships to a decision-maker without passing the validation gate. "Directional" is not a green light.
Phase 1 deliverable
A validation layer service every AI agent calls before it returns a number — same way every web request goes through auth.
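To make the gate concrete, here is a minimal sketch of what such a validation service could look like; `Metric`, `validate_report`, and the 1% tolerance are illustrative assumptions, not the shipped API.

```python
# Sketch of the validation gate: every AI-generated report reconciles against
# the source of truth (Power BI) before it ships. All names and the tolerance
# are illustrative placeholders, not the real Phase 1 service.
from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    value: float


class ValidationError(Exception):
    """Raised when a report fails the gate; the report does not ship."""


def validate_report(report_metrics: list[Metric],
                    source_of_truth: dict[str, float],
                    tolerance: float = 0.01) -> list[Metric]:
    """Return the metrics only if every one reconciles with the source of truth."""
    for m in report_metrics:
        if m.name not in source_of_truth:
            # Not in the validated metric library: "directional" is not a pass.
            raise ValidationError(f"{m.name}: not a certified metric")
        expected = source_of_truth[m.name]
        # Relative difference must sit inside the tolerance band.
        if abs(m.value - expected) > tolerance * max(abs(expected), 1e-9):
            raise ValidationError(f"{m.name}: {m.value} vs validated {expected}")
    return report_metrics
```

Every agent would call this the same way every web request goes through auth: the report either passes whole or raises and never reaches a decision-maker.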
06 · Layer 3 · Security & Infrastructure
Layer 3 · The compliant moat

Build the environment that keeps the data in and the wrong data out.

This is the layer that separates "vibe-coded prototype" from "DSO-ready system." HIPAA / PCI / SOX-isolated. Hardware infrastructure designed for it. Vendor BAAs executed before any PHI touches the system. This is what the rest of dental hasn't built yet — and what every senior leader will pay for to reduce their personal risk.
Data isolation by default
AI workloads run in an environment where the data physically cannot escape — egress controls, model gateways, no direct vendor passthroughs. The wrong data can't leak even if a prompt asks for it.
Hardware infrastructure (TBD)
The hardware footprint is still being scoped — on-prem vs. dedicated cloud tenancy vs. hybrid. This is the next architectural decision and a Phase 1 unblock.
Vendor BAAs — Anthropic and beyond
Every AI provider that touches PHI signs a BAA before integration. Anthropic is the priority (primary model provider). Each new vendor goes through the same gate.
Phase A / Phase B split
Phase A (today) is PHI-free — practice profiles, staff data, aggregate metrics, brand assets. Phase B unlocks after the BAAs and infrastructure are in place. ~71% of the value is reachable in Phase A.
Compliance posture
Three frameworks · one isolated environment.
HIPAA — PHI handling · BAAs · audit logs · access controls
PCI-DSS — payment data isolation · scope reduction · tokenization
SOX — financial controls · change management · IPO/S-1 readiness
? Hardware footprint — open question · needs scoping
? Anthropic BAA — to be initiated · prerequisite to PHI work
NIST RMF mapping — governance overlay (see slide 12)
The moat
Most of dental hasn't realized this is something they need yet. By the time they do, SGA can be the one with a fully baked solution.
The opportunity beyond SGA
Once it's built, this environment is itself a product — hosting, consulting, the "vibe-coder evaluator" role for the next three years. There is real business here.
07 · Layer 4 · SGA AI OS
Layer 4 · The orchestration engine

SGA AI OS is the operating system every agent runs on.

A workflow runtime that schedules agents, routes tasks, manages context, logs every action, and gives us one control plane to govern every AI worker in the org. Once it exists, building a new agent is configuration — not engineering.
Agent runtime + scheduler
Run agents on cron, on event, on demand. Daybreak runs at 6am. ROD scorecards run every Monday. Outbound RCM agents run when a balance ages past 30 days.
Routing & tool access
Each agent gets only the tools and data it needs. PHI-aware routing: an agent without Phase B clearance simply can't reach PHI tables.
Context & memory
Shared memory across agents — what's been said, what's been done, what's been promised. No agent starts from zero.
Audit log of everything
Every prompt, every tool call, every output is logged. This is what makes governance possible at all — and what powers the feedback loop on the next slide.
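A hedged sketch of the control-plane ideas on this slide (scoped tool access, PHI-aware routing, an audit entry for every call); every name here is hypothetical, not the real SGA AI OS interface.

```python
# Illustrative control plane: each agent carries its allowed tools and its
# Phase A/B clearance; every routing decision is logged, allowed or not.
# AgentSpec, run_agent, and the phase codes are assumptions for this sketch.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AgentSpec:
    name: str
    allowed_tools: set[str]
    phase: str  # "A" = PHI-free clearance, "B" = PHI-cleared


AUDIT_LOG: list[dict] = []


def run_agent(agent: AgentSpec, tool: str, touches_phi: bool) -> bool:
    """Route one tool call, enforcing scope and PHI clearance, and log it."""
    allowed = tool in agent.allowed_tools and (not touches_phi or agent.phase == "B")
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent.name,
        "tool": tool,
        "phi": touches_phi,
        "allowed": allowed,
    })
    return allowed
```

The point of the sketch: a Phase A agent cannot reach PHI even if a prompt asks for it, and the denial itself lands in the same audit trail that feeds the feedback loop.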
Why this matters strategically
The difference between a project and a platform.
One platform, not 10 standalone integrations
One audit trail, not 10 vendor logs
One governance surface, not 10 review meetings
Configuration to add an agent, not engineering sprints
Common identity / access, not per-tool ACLs
Vendor-agnostic — swap models without rewriting agents
The plumbing & pad
SGA AI OS is the foundation that lets us bolt on any agent in days, not months. Daybreak, ROD scorecards, outbound RCM, reactivation — all the same engine.
Build path
Phase 1 ships a working orchestration layer with the audit log lit. We may borrow open-source orchestration tooling under the hood — what matters is the SGA-controlled control plane.
08 · Layer 5 · Feedback Loop · Mechanics
★ Layer 5 · The flywheel · how it works

The most important layer we will build.

Every other layer pays off only if this one exists. The feedback loop is what turns a daily briefing into a self-correcting machine — and what turns SGA's operations data into something genuinely proprietary.
Flywheel: Step 1 Instruct → Step 2 Act → Step 3 Measure → Step 4 Learn · at the core, the algorithm amplifies what works
STEP 1 · INSTRUCT
The system tells the OM what to do today
Daybreak ranks actions by impact. Each action is specific, measurable, and tied to a metric the engine knows how to track.
STEP 2 · ACT
The OM (or an agent) executes
Some actions are human (case presentation, hygiene huddle). Others get handed to agents (text confirmations, reactivation outreach).
STEP 3 · MEASURE
Did the action happen, and did it move the metric?
The engine watches the underlying data. We see execution (was the call made?) and outcome (did production move?) in the same place.
STEP 4 · LEARN
Amplify what works · suppress what doesn't · flag non-compliance
The algorithm boosts actions that produced lift, deprioritizes ones that didn't, and surfaces the OMs who never executed at all.
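The LEARN step can be sketched as a simple priority update; the exponential-moving-average weighting and the `alpha` value are illustrative choices for this sketch, not the production algorithm.

```python
# Toy LEARN step: each action's priority drifts toward the lift it actually
# produced, so winners float to the top of tomorrow's briefing and duds sink.
# update_priority, rank_actions, and alpha=0.3 are assumptions, not the real engine.
def update_priority(current: float, observed_lift: float, alpha: float = 0.3) -> float:
    """Blend the latest observed lift into the action's running priority."""
    return (1 - alpha) * current + alpha * observed_lift


def rank_actions(priorities: dict[str, float]) -> list[str]:
    """Tomorrow's briefing order: highest learned priority first."""
    return sorted(priorities, key=priorities.get, reverse=True)
```

Run daily, two actions that start equal diverge as soon as one shows measurable lift and the other does not, which is exactly the amplify/suppress behavior described above.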
09 · Layer 5 · Feedback Loop · Value
★ Why this layer is the gift that keeps giving

Two compounding payoffs — and they get more valuable every day they run.

The feedback loop produces something rarer than any single agent: a proprietary dataset linking specific actions to specific outcomes across hundreds of practices — and a real-time transparency layer over how the org actually executes.
Payoff 1
A proprietary action → outcome dataset.
Every instruction · every execution · every measurable result is captured. Over time this becomes the most valuable asset the company has — and the only one no competitor can replicate.
Practices: 260 · Daily actions: ~5K+ · Per year: ~1.8M · Compounds: yearly
What it unlocks
Predictive playbooks — "practices like yours that did X next saw Y"
M&A integration — drop a new acquisition into the curve immediately
Vendor leverage — go to PMS / DI / Zernio with our data, not theirs
Strategic IP — defensible, board-ready evidence of what drives DSO performance
Payoff 2
Real-time monitoring & transparency.
Today we hand out reports and hope. The feedback loop tells us — by name, by practice, by region — who executed, who didn't, and what the outcome was. That changes how we run the company.
Visibility: per-OM · Latency: same-day · Coverage: network · Audit: continuous
What it changes
RODs coach on what actually happened — not what was supposed to happen
Karen sees compliance across the network without phone calls
Myles sees momentum — what's accelerating, what's stuck, with evidence
Governance becomes audit-ready — every action traceable to outcome (NIST · S-1)
The strategic point
If we ship nothing else from Phase 1 except this layer, SGA still wins — because every other agent we ever build plugs into it and gets smarter for free.
The cultural point
This is the layer that ends "I think we're doing X" and starts "the data shows we did X, and here's what happened next." Transparency at every altitude.
10 · Layers 6 + 7
Layers 6 & 7 · The compounding output

Once the loop is running, the system gets better — and starts taking work off humans.

These two layers are the dividend Phase 1 pays for years. Self-Improvement makes the engine smarter every day. Remedial Automation routes the now-obvious tasks to agents instead of OMs.
Layer 6
Self-Improvement Loop
The algorithm sitting in the middle of the flywheel. It's not a person — it's a continuous learning system that watches the data and adjusts what each OM gets next.
What it does
Amplifies what works — actions that produced measurable lift get prioritized network-wide
Suppresses what doesn't — actions that didn't move the metric get demoted
Personalizes — a high-performing OM gets different prompts than a struggling one
Flags non-compliance — surfaces the people who never engage at all
Compounds — every day's data makes tomorrow's instructions sharper
Layer 7
Remedial Task Automation
When the loop has identified a task as repetitive, low-judgment, and high-volume — an agent picks it up. The OM stops doing it. Production keeps running.
First wave of agent-handled work
Outbound confirmations — Neurality outbound agent · already in build
Reactivation campaigns — agent calls DI / Modento with patient list
RCM follow-ups — aging balance nudges · text + email
Pre-appointment screening — forms, history, intake
Internal reporting — ad-hoc Karen / Myles requests handled by AI
The progression
Today: ROD tells the OM what to do. Tomorrow: the system tells the OM what to do. Next year: the system does it for the OM where it can — and the OM only does what requires human judgment.
Where this leads
"Do we still need as many ROD-side observers?" becomes a real question. The work doesn't go away — it gets reshaped around what only humans can do.
11 · The S-Curve Roll-Up
The executive view · where every practice sits

The whole network on one curve — momentum, not just snapshot.

Every OM, every practice, every region plots onto the same S-Curve. We can see who's stuck, who's accelerating, who's plateaued, and who's elite — and the engine knows what to push to each one.
Network S-Curve · momentum view
Each dot = one practice · X-axis = AI engagement maturity · Y-axis = performance lift
LAGGARDS
Not yet engaging
No Daybreak action, no execution data. Engine flags for ROD coaching intervention.
EARLY
Inconsistent execution
Engaging some days, missing others. Engine personalizes prompts to build the habit.
FAST FOLLOWERS
Consistent & gaining
Acting daily, lift visible in metrics. Engine adds higher-leverage actions.
ELITE
Compounding outliers
Producing the playbook the rest of the network learns from. Become the training set.
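One way to sketch the bucketing, assuming engagement maturity is normalized to [0, 1]; the 0.25 / 0.50 / 0.75 cut points are placeholders, not agreed definitions.

```python
# Illustrative mapping from the x-axis of the S-Curve (AI engagement maturity)
# to the four stages on this slide. Thresholds are placeholder assumptions;
# the real cut points would come from the engine's execution data.
def s_curve_stage(engagement: float) -> str:
    """Map a practice's engagement maturity in [0, 1] to its curve stage."""
    if engagement < 0.25:
        return "LAGGARDS"        # no Daybreak action, no execution data
    if engagement < 0.50:
        return "EARLY"           # engaging some days, missing others
    if engagement < 0.75:
        return "FAST FOLLOWERS"  # acting daily, lift visible in metrics
    return "ELITE"               # compounding outliers; the training set
```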
Why a curve, not a list
A list shows where you are. A curve shows where you're going. Momentum is the actual leading indicator.
12 · Governance · NIST RMF
The governance overlay · NIST AI Risk Management Framework

Governance is not a slide deck — it's the architecture mapped to a recognized framework.

Every layer in this stack maps to one of NIST RMF's four functions. This is what makes the system audit-ready, IPO / S-1 defensible, and what we'll embed into the VP of AI job description as the operating standard.
Function 1
Govern
Org-wide policy, roles, accountability for AI systems and their outputs.
  • VP of AI as accountable owner
  • SharePoint strategy site · single source of truth
  • Policy library · vendor BAAs, data classification, escalation
  • Board reporting cadence · governance dashboard
Function 2
Map
Inventory every AI system, its purpose, data inputs, and risk profile.
  • Agent registry · every agent in the AI OS catalogued
  • Data flow maps · what touches what, what's PHI vs not
  • Risk classification · per agent and per workflow
  • Phase A / Phase B tagging · deployment gates
Function 3
Measure
Continuous monitoring of accuracy, drift, bias, and outcomes.
  • Audit log · every prompt and output captured
  • Validation gate · output checked vs. source of truth
  • Outcome telemetry · feedback-loop data feeds the metric
  • Drift detection · alerts when model behavior shifts
Function 4
Manage
Respond, remediate, retire — the full lifecycle of AI risk.
  • Incident response runbooks · per agent class
  • Kill-switch · centrally disable any agent
  • Continuous improvement · feedback loop drives revision
  • Retirement criteria · sunset agents that don't earn keep
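The kill-switch idea reduces to a very small mechanism; this sketch uses hypothetical names and an in-memory registry where the real system would persist state.

```python
# Illustrative kill-switch for the Manage function: a central disabled-set
# that every dispatch checks before an agent runs. Names are assumptions;
# a production version would persist this and log each denial to the audit trail.
DISABLED: set[str] = set()


def kill(agent_name: str) -> None:
    """Centrally disable an agent; takes effect on its next dispatch."""
    DISABLED.add(agent_name)


def may_dispatch(agent_name: str) -> bool:
    """Every scheduled or on-demand run consults this gate first."""
    return agent_name not in DISABLED
```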
Why NIST RMF specifically
Recognized federal framework, aligned with HIPAA / SOX expectations, and the language auditors and S-1 readers already speak. We don't need to invent governance — we need to map to a standard.
Phase 1 deliverable
NIST RMF mapping document and the VP of AI job description that embeds this as the operating standard. Both ship in the SharePoint AI strategy site Ron is sponsoring.
13 · Outcomes & Roadmap
Phase 1 outcomes & the 30/60/90

The plumbing pays for itself before the agents start bolting on.

Decision velocity: 10× · from ad-hoc reports to validated AI reports in minutes
Hours reclaimed / wk: ~40+ · from the 40-hour ad-hoc report cycle Scott runs today
OMs covered: 260 · Daybreak briefings, then ROD scorecards, then network roll-up
Network savings: $11K/mo · practice analytics retire after May once Phase 1 ships
Days 0 → 30
Foundation hardens
  • Strategy on paper · SharePoint AI strategy site stood up
  • VP of AI JD with NIST RMF embedded
  • Validation layer v1 · gate every AI report against PBI
  • Anthropic BAA initiated
  • Hardware infrastructure scoping decision
Days 30 → 60
Loop lights up
  • SGA AI OS orchestration layer · audit log live
  • Daybreak feedback loop · execution + outcome captured
  • ROD dashboard pilot with one region
  • Treatment plan data migration done · DI bridge retires
  • June 8 expansion green-lit on validated data only
Days 60 → 90
Bolt-ons start
  • Self-improvement loop v1 · algorithm tuning what each OM gets
  • S-Curve roll-up for Karen / Myles
  • First remedial agent · outbound confirmations or RCM follow-up
  • NIST RMF posture report · governance dashboard v1
  • Phase 2 charter drafted on PHI / BAA-gated work
The framing for Myles
This is a 90-day investment that produces the foundation for every AI capability we ever ship. The agents on top are the dividend — and they keep coming.
The framing for Karen
OMs get a clearer day. RODs coach with evidence. Karen stops fielding reconciliation calls. The work doesn't change — the noise does.
14 · The Asks
What we need from each of you to ship Phase 1

Three sponsors. Three different unblocks. One foundation.

Phase 1 needs alignment from each of you on a different lever. None of this is a budget ask — it's a permission ask, a sponsorship ask, and a trust ask.
Sponsor 1 · Security & Compliance
Ron
CIO · IT org · governance owner
Approve the SharePoint AI strategy site
Single source of truth for strategy, workflows, NIST RMF mapping. Already discussed.
Sponsor the hardware infrastructure scoping
On-prem / hybrid / dedicated cloud — needs a Phase 1 decision.
Initiate the Anthropic BAA
Prerequisite to any Phase B PHI work. Drives the broader BAA template.
Co-author the VP of AI JD
NIST RMF embedded · governance posture as part of the role definition.
Sponsor 2 · Operations
Karen
Ops leadership · OM & ROD chain · Scott's executive sponsor
Pilot region for ROD dashboard
Pick one region for the 30–60 day pilot. Feedback loop runs against this group first.
Backstop on data validation pushback
When users want unvalidated data "directionally," Ops leadership holds the line.
Sponsor the Daybreak rollout cadence
Approve the OM rollout sequence. Endorsement makes adoption an expectation, not an option.
Champion transparency as the standard
The feedback loop's value depends on Ops embracing visibility instead of resisting it.
Sponsor 3 · Executive Trust
Myles
Executive · the natural skeptic · the data buyer
Agree on validated-data-only for decisions
"Directional" stops at the validation gate; only validated data reaches decision-makers.
Pilot agreement for the feedback loop
Treat the first region as the pilot. Watch the data. Decide on expansion based on evidence.
Approve June 8 expansion gate
Two more clinics stand up only if BI data is validated and signed off. Not before.
Help define the executive S-Curve view
What does Myles want to see weekly? That shapes the exec layer of the roll-up.
The closing thought
Build the plumbing & pad once. Bolt on agents forever. Phase 1 is the gift that keeps giving — and the gift that produces more gifts.
Next step
Ron / Karen / Myles working session · 60 min · align on the three asks · greenlight the 30/60/90 roadmap.