Most AI products add intelligence to an existing workflow. Waypoint is different — remove the AI and there's nothing left.
The decisions that matter most are how the model behaves, what it's allowed to say, and how much control the clinician retains over the output.
Constraining the Model
Clinical ruleset baked into the prompt layer. The model doesn't run free. Every generation is constrained to evidence-based frameworks — CBT, DBT, motivational interviewing, trauma-informed care. This wasn't a safety guardrail bolted on after the fact. It was the first design decision I made, before a single UI element existed. In AI-core products, the prompt layer is the product.
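The idea of the ruleset living in the prompt layer can be sketched as a small composition step, where the approved frameworks are compiled into the system prompt before any user input arrives. This is an illustrative sketch, not Waypoint's actual prompt; the function name and wording are invented.

```python
# Hypothetical sketch: the allowed clinical frameworks are baked into the
# system prompt, so every generation is scoped before the clinician types
# anything. Wording and names are illustrative.

ALLOWED_FRAMEWORKS = (
    "CBT",
    "DBT",
    "motivational interviewing",
    "trauma-informed care",
)

def build_system_prompt(frameworks=ALLOWED_FRAMEWORKS) -> str:
    """Return a system prompt that scopes generation to approved frameworks."""
    framework_list = ", ".join(frameworks)
    return (
        "You are a clinical session-planning assistant. "
        "Every exercise, prompt, and activity you produce must be grounded in "
        f"one of these evidence-based frameworks: {framework_list}. "
        "If a request falls outside these frameworks, decline and explain why."
    )

prompt = build_system_prompt()
```

The point of the sketch is the ordering: the constraint exists before the interface does, which is exactly the design sequence described above.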
Insurance billing alignment as a constraint, not a feature. Session output is structured to align with billable service categories from the start. A clinician never ends up holding content that conflicts with the service they're billing. Building this into generation rather than leaving it as a manual check removes an entire failure mode.
HIPAA surface area minimized by design. Because the system is purely generative and no PHI is stored at the participant level, the compliance footprint stays small. This was a product decision made at the architecture stage — not a compliance retrofit. A clinician who can't trust the tool's data handling won't use it regardless of how good the output is.
Human in the Loop
Inline segment regeneration. Clinicians can flag any section of a generated plan — an exercise, a discussion prompt, a closing activity — and regenerate just that piece without starting over. This is the most important feature in the product. It acknowledges that AI output is a starting point, not a final answer, and puts the clinician in the role of editor rather than recipient.
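The mechanics of regenerating a single segment are simple to sketch: the plan is a list of segments, and only the flagged one is replaced while the rest stay untouched. This is an illustrative sketch; `generate_segment` stands in for the actual model call.

```python
# Hypothetical sketch of inline segment regeneration: replace only the
# flagged segment, leaving everything else exactly as the clinician had it.

def regenerate_segment(plan, index, generate_segment):
    """Return a new plan with only plan[index] regenerated."""
    new_plan = list(plan)  # never mutate the clinician's current plan
    new_plan[index] = generate_segment(plan[index])
    return new_plan

plan = ["opening check-in", "grounding exercise", "closing activity"]
revised = regenerate_segment(plan, 1, lambda old: f"regenerated: {old}")
```

Returning a new plan rather than mutating the old one matters here: the clinician-as-editor model implies they can always compare against, or fall back to, what they already had.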
Free-text context override. Structured parameters (session type, duration, format) get you 80% of the way there. The free-text field handles the rest — "this group has been together for six months," "someone disclosed last week, keep things grounded today." The model reads these and adapts. Designing this input carefully was as important as any visual decision I made.
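How the structured parameters and the free-text field might be merged into a single generation request can be sketched as follows. Field names and layout are invented for illustration; the real input design is the product of the careful work described above.

```python
# Hypothetical sketch: structured parameters set the frame, and the optional
# free-text note carries the nuance no form field can capture. Names invented.

def build_request(session_type, duration_minutes, fmt, context_note=""):
    """Compose structured params and free-text context into one request."""
    lines = [
        f"Session type: {session_type}",
        f"Duration: {duration_minutes} minutes",
        f"Format: {fmt}",
    ]
    if context_note.strip():
        lines.append(f"Clinician context: {context_note.strip()}")
    return "\n".join(lines)

req = build_request("group", 60, "in-person",
                    "this group has been together for six months")
```

The design choice the sketch reflects: the free-text field is additive, not a replacement, so the structured 80% is always present and the note only ever narrows it.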
Tone and emotional register controls. Clinicians can indicate the group's emotional state before generating — high energy, heavy week, someone new in the room. The output adapts accordingly. This reflects something I believe strongly: designing for the emotional context of the user is more important than designing for their task. A clinician whose group just went through something hard needs a different session than one whose group is thriving.
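A tone control like this can be as small as a mapping from the clinician's read of the room to an explicit generation directive. The register names and directive wording below are invented for illustration.

```python
# Hypothetical sketch: each emotional register the clinician can select maps
# to a concrete instruction the model receives. Registers and wording invented.

TONE_DIRECTIVES = {
    "high_energy": "Favor active, interactive exercises; keep the pace brisk.",
    "heavy_week": "Keep the session grounded and low-stimulation; slow the pace.",
    "new_member": "Include a low-pressure introduction; avoid deep disclosure prompts.",
}

def tone_directive(register: str) -> str:
    """Translate the group's emotional state into an explicit model instruction."""
    return TONE_DIRECTIVES.get(register, "Use a neutral, steady register.")
```

Keeping the set of registers small and named in the clinician's own language is what makes this a one-tap decision before generating, rather than another form to fill out.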
Traditional software design asks "what does the user need to do?" AI-core design asks "what does the user need to trust, correct, and control?" Every decision above is an answer to that second question.
Operating Principles
Constrain the model as a design act. The most important design decisions aren't in the UI — they're in what you tell the model it can and can't do.
Trust is load-bearing. In healthcare AI, a user who doesn't trust the output won't correct it — they'll abandon the tool entirely.
Emotional context over task context. Who is this person, at what point in their week, in what state of mind? That determines everything about how the interface should behave.
Design for the feedback loop. AI products that can't be corrected or refined in the moment fail in practice. Control mechanisms aren't features — they're the foundation.