How a DSO Put 47 Practices on Voice AI in a Weekend
The brief version: a DSO managing 47 dental practices across three states provisioned all 47 voice agents over a weekend. By Monday morning, every practice had a live AI receptionist — customized with the right services, hours, insurance networks, and staff information for that specific location.
No one wrote 47 system prompts. No one manually entered data for 47 knowledge bases. No one had a setup call for each practice. One person ran a script on Friday afternoon, reviewed a status dashboard Saturday morning, and handled three practices that needed manual KB supplements because their websites were thin. Everything else was Workforce Wave.
Here's what that actually looked like technically, and why the infrastructure story is the interesting part.
The Setup: What "47 Practices" Actually Means
A DSO at this scale isn't a monolith. Each of the 47 practices has its own identity:
- Its own phone number (most have at least two — front desk and scheduling line)
- Its own hours (they range from 8-5 weekdays only up to 6-day practices with Saturday hours)
- Its own insurance network mix (some in-network with 8 carriers, some with 3, one that's fully fee-for-service)
- Its own service mix (12 practices offer oral surgery, 31 offer Invisalign, 4 are pediatric-only)
- Its own staff roster (ranging from 2 providers to 9)
- Its own voice preference (the DSO wanted consistency in tone but let each practice choose a voice style — warmer for pediatric locations, more clinical for specialty practices)
In a traditional voice AI deployment model, this is 47 separate projects. 47 system prompts. 47 KB setups. 47 platform logins (or a lot of manual data entry into a shared account). 47 rounds of testing. Probably 6–8 weeks of work for a team of 2–3, not a weekend.
What Actually Happened: The Provisioning Script
The DSO provided a CSV with 47 rows. Each row: practice name, website URL, primary phone number, and a few overrides (voice style preference, whether the practice is pediatric-only).
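A few rows, with invented values and column names (the real file just needs the fields above), look like this:

practice_id,practice_name,website_url,phone,voice_style,pediatric_only
westside,Westside Dental,https://westsidedental.example.com,+15551230101,warm,false
lakeview-os,Lakeview Oral Surgery,https://lakeviewos.example.com,+15551230102,clinical,false
kidsmiles,Kid Smiles Pediatric,https://kidsmilespediatric.example.com,+15551230103,warm,true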
The provisioning script looped through the CSV and called POST /v2/agents for each row:
# One call per practice — Workforce Wave handles the rest
curl -X POST https://api.workforcewave.com/v2/agents \
  -H "Authorization: Bearer {token}" \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: dso-provision-practice-{practice_id}" \
  -d '{
    "payload": {
      "name": "{practice_name} AI",
      "platform": "elevenlabs",
      "business_url": "{practice_website}",
      "template_id": "dental_receptionist",
      "phone_number": "{practice_phone}",
      "partner_slug": "smilecare",
      "voice_style": "{voice_preference}"
    }
  }'
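The loop around that call is the entire script. A minimal Python sketch, using the illustrative CSV columns from above (retries and error handling omitted):

import csv
import requests

API_URL = "https://api.workforcewave.com/v2/agents"
TOKEN = "..."  # the DSO account's API token

with open("practices.csv", newline="") as f:
    for row in csv.DictReader(f):
        resp = requests.post(
            API_URL,
            headers={
                "Authorization": f"Bearer {TOKEN}",
                # Idempotency key per practice: a crashed run can be
                # restarted without double-provisioning anyone
                "Idempotency-Key": f"dso-provision-practice-{row['practice_id']}",
            },
            json={
                "payload": {
                    "name": f"{row['practice_name']} AI",
                    "platform": "elevenlabs",
                    "business_url": row["website_url"],
                    "template_id": "dental_receptionist",
                    "phone_number": row["phone"],
                    "partner_slug": "smilecare",
                    "voice_style": row["voice_style"],
                }
            },
        )
        resp.raise_for_status()
        print(row["practice_id"], "submitted")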
The script ran in 6 minutes. All 47 provisioning operations came back pending.
Workforce Wave ran the provisioning pipeline 47 times in parallel — crawling 47 websites, extracting 47 sets of business data, generating 47 system prompts and knowledge bases, configuring 47 ElevenLabs agents. By Saturday morning, 44 of the 47 were active. Three had provisioning notes indicating low crawl confidence (thin websites, mostly PDFs, minimal structured content) and had been provisioned with vertical-fallback prompts, flagged for manual KB supplement.
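The Saturday morning review can be scripted too. This sketch assumes a list endpoint at GET /v2/agents that returns per-agent status and provisioning notes; treat that endpoint shape as our illustration rather than documented API:

import requests

resp = requests.get(
    "https://api.workforcewave.com/v2/agents",  # hypothetical list endpoint
    headers={"Authorization": "Bearer {token}"},
)
resp.raise_for_status()

# Surface anything that isn't fully active, e.g. the three
# low-crawl-confidence practices flagged for manual KB supplements
for agent in resp.json().get("agents", []):
    if agent.get("status") != "active":
        print(agent.get("name"), agent.get("status"), agent.get("provisioning_notes"))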
The White-Label Layer: "SmileCare AI," Not "Workforce Wave"
The DSO didn't want their practices running a Workforce Wave-branded AI. They wanted consistency with their own brand — SmileCare Dental Group's practices would have "SmileCare AI."
This is the partner branding layer in action.
When the DSO's account was set up, we configured a partner record with partner_slug: "smilecare". This controls:
- Agent voice identity — the agent introduces itself as "SmileCare AI" or the configured practice name, never Workforce Wave
- Dashboard branding — the DSO's staff managing agents sees a SmileCare-branded portal, not the WFW interface (configurable via the white-label admin panel)
- Webhook payloads — the source field in all webhook events reflects the partner slug, not WFW (see the example payload below)
- API responses — the agent.brand field uses the partner configuration
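For example, a call-completed webhook for one of these practices identifies the partner, not the platform. The event name and field set here are illustrative:

{
  "event": "call.completed",
  "source": "smilecare",
  "agent_id": "agt_general_westside",
  "payload": {
    "duration_seconds": 94,
    "outcome": "appointment_booked"
  }
}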
The practices never see "powered by Workforce Wave" unless the DSO explicitly adds that attribution. The AI is theirs — WFW is the infrastructure underneath.
This matters for the DSO's value proposition internally. They're not reselling a vendor tool; they're deploying their own AI capability across their network. That's a different story to tell to their practice managers, and it's a different level of adoption confidence.
Fleet Management: One Dashboard, 47 Practices
Once all 47 agents were active, the DSO needed a way to manage them without 47 separate logins or 47 separate dashboard tabs.
The fleet view shows:
- All 47 agents with current status (active, maintenance, flagged)
- KB health score per agent (0–100, color-coded)
- Call volume for the trailing 7 days, with anomaly flags (a practice that normally receives 40 calls a week and suddenly drops to 8 gets a yellow flag, which could be a phone issue, an agent problem, or just a slow week; see the sketch after this list)
- Unreviewed KB staleness flags, aggregated across the fleet and drillable per practice
- Unreviewed prompt optimization suggestions, same aggregation
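The call-volume anomaly flag doesn't need to be sophisticated. Here's a sketch of the idea; the thresholds are assumptions, not the production heuristic:

def volume_flag(weekly_calls: list[int]) -> str:
    """Flag an agent whose latest week deviates sharply from its own baseline.

    weekly_calls holds trailing weekly call counts, oldest first.
    The 0.4x / 2.5x thresholds are illustrative.
    """
    *history, latest = weekly_calls
    baseline = sum(history) / len(history)
    if latest < 0.4 * baseline or latest > 2.5 * baseline:
        return "yellow"
    return "green"

# A practice averaging ~40 calls a week that drops to 8 gets flagged
print(volume_flag([42, 38, 41, 8]))  # -> yellow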
For a DSO operations manager, the Monday morning workflow is: open fleet dashboard, scan for red/yellow, handle flagged items. If everything's green, you're done in 5 minutes. If there are flags, you drill into the specific agent and handle the specific issue.
This is fundamentally different from the DSO managing 47 separate voice AI vendor relationships. They have one integration, one dashboard, one contract, and one team who understands the system.
Prompt Optimization Across the Fleet
Here's one of the less obvious benefits of fleet-scale deployment: what the AI learns at one practice benefits all 47.
When Workforce Wave Prompt Optimizer detects a pattern — say, callers at 12 practices consistently ask about pediatric sedation and the agent handles it awkwardly — the insight isn't siloed to those 12 practices. The DSO sees a fleet-level optimization suggestion: "Callers across 12 practices have asked about pediatric sedation in the last 30 days; agents resolved this successfully in only 41% of cases. Suggested addition to the pediatric template: [specific language]. Estimated improvement: 60%+ resolution rate based on similar pattern fixes at other deployments."
The DSO approves the change once. It applies to all 47 agents (or a subset they select). This is the leverage point of fleet-scale infrastructure that per-practice deployments can never access.
Individual practices learning from each other's call patterns, without anyone needing to manually review call recordings across 47 locations.
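In API terms, that approval could be a single call. The endpoint and fields below are our sketch of the shape, not the documented API:

import requests

# Apply an approved optimization suggestion fleet-wide (or to a subset)
resp = requests.post(
    "https://api.workforcewave.com/v2/suggestions/{suggestion_id}/apply",  # hypothetical
    headers={"Authorization": "Bearer {token}"},
    json={"scope": "fleet"},  # or {"scope": "agents", "agent_ids": [...]}
)
resp.raise_for_status()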
The A2A Angle: Specialty Referrals at Scale
Post 1.4 covered what happens when one AI calls another. For a DSO, this has a specific and immediately practical application: specialty referrals.
Many DSOs have a mix of general practices and specialty practices — oral surgery, periodontics, endodontics. When a general practice needs to refer a patient to a specialty location, there's typically a phone call involved: the referring office calls the specialty office to find an open appointment slot that works for the patient.
With dual-mode enabled across all 47 practices (which is automatic for WFW agents), a referral coordinator AI at the general practice can call the specialty practice's WFW agent in bot mode:
{
  "request_type": "referral_appointment",
  "patient_type": "returning",
  "procedure": "wisdom_tooth_extraction",
  "referring_practice": "agt_general_westside",
  "preferred_timeframe": "within_2_weeks",
  "patient_availability": ["morning", "thursday_friday"]
}
The specialty practice's agent responds with available slots. The referral coordinator books the one that works, confirms back to the patient, and triggers a confirmation webhook to both practices.
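The structured response travels the same channel. An illustrative shape for the slot list (field names and values are our assumptions):

{
  "request_type": "referral_appointment",
  "status": "slots_available",
  "slots": [
    {"start": "2025-06-05T09:00:00-05:00", "provider_id": "prov_001"},
    {"start": "2025-06-06T10:30:00-05:00", "provider_id": "prov_002"}
  ],
  "hold_expires_in_seconds": 600
}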
No phone tag. No coordination friction. The AI-to-AI coordination happens in seconds, and both sides get structured confirmation records.
For a DSO doing 200+ specialty referrals a month, this is significant. The coordination work that currently takes front desk time at both ends of the referral gets absorbed into the AI layer.
The Infrastructure Insight
The thing we came back to repeatedly while building WFW's fleet capabilities is this: the DSO didn't build anything proprietary here. They configured infrastructure.
They didn't develop a custom AI model. They didn't write prompt engineering logic. They didn't build a KB management system or a staleness detection pipeline or a dual-mode voice detection stack. They brought a business URL and a CSV file.
The moat isn't in the prompts. Any competitor can write a dental system prompt. The moat is in the infrastructure that makes a 47-practice deployment a weekend project instead of a 6-week one — and then keeps all 47 agents current, improving, and visible from a single dashboard indefinitely.
This is what "voice AI infrastructure platform" means in practice. Not a tool that helps you build one agent. A platform that makes building a hundred agents take the same operational lift as building one.
The DSO that deployed 47 practices over a weekend isn't a case study in AI capability. It's a case study in infrastructure leverage. They showed up with a list of URLs. Workforce Wave did the rest.
That's the end of Series 1: The Bot That Builds Bots. Next series coming in June — Series 2: The Call That Closes — on how voice AI changes the economics of inbound lead conversion.
Ready to put AI voice agents to work in your business?
Get a Live Demo — It's Free