Waypoint

2026

I watched my husband burn out preparing group therapy sessions from scratch every week, so I built a tool that should have already existed. Waypoint fills the pre-session content gap in behavioral health, sitting alongside the EHRs clinicians already use rather than replacing them. I'm iterating on it daily, so this case study is best read as a snapshot of the first few days of the build. Check out the live site for the latest interactions and enhancements.

Waypoint hero
48h · Zero to deployed
Limited · Prior CLI experience
100% · Self-directed
Live · In active weekly use

Role

Waypoint is an independently conceived and built product. In the span of a few days, I went from limited familiarity with modern AI-assisted development workflows to designing, building, and deploying a functional system that is now in active, ongoing use.

This project sits at the intersection of product design, agentic systems, and rapid prototyping. I led the work end-to-end: identifying the opportunity, conducting user research, defining the product direction, designing the experience, building the system, and iterating based on real-world usage.

Context

Behavioral health clinicians running groups have a structural problem most people don't know about: participants cycle in and out on a non-cohort basis. There's no "semester."

A participant could be in their first week or their eighth, sitting in the same room as someone who has heard a topic or lesson before. That means clinicians are under constant pressure to generate original, clinically sound session material every single week.

"I spend more time building the session than running it."

— Research participant, substance use recovery clinician

I heard versions of this constantly from my husband and his colleagues. The hidden cost is time, plus a mental load that compounds across the week when you're already holding space for people in crisis.

Definition

Before touching any tooling, I interviewed several working clinicians — both my husband and colleagues of his — to map the real shape of the problem. A few things I learned that refined the product direction:

  • 1 Clinicians want speed, but they also want nuance. A quick dropdown builder isn't enough; they need a free-text input to add context the dropdowns can't capture.
  • 2 Mixed-stage groups are the hardest to plan for. No existing tool addresses this. Content has to work for someone in week one and week thirty simultaneously.
  • 3 Mobile-first isn't a nice-to-have. Clinicians run groups with a phone or tablet in hand. Desktop-only tools get abandoned at the door.
  • 4 Print matters. Participant handouts are still physical. PDF export is a core workflow, not an edge case.

Competitive Landscape

Behavioral health EHRs like Kipu Health and Ease Health are well established. They include group therapy documentation, attendance tracking, group note workflows, and group billing logic, along with ASAM criteria and level-of-care tools built into clinical workflows. But none of them generate group curriculum or session content. They manage what happened after the fact, not what to run.

Capability                               | EHRs    | AI note tools | Static libraries | AI planners | Waypoint
Generate group session content           |         |               | Partial          | Partial     | ✓
SUD / addiction specific                 | Partial |               | Partial          |             | ✓
Insurance billing alignment              | ✓       | Partial       |                  |             | ✓
Mixed-stage group handling               |         |               |                  |             | ✓
Co-facilitation / participant leadership |         |               |                  |             | ✓
Content variation engine                 |         |               |                  |             | ✓
Mobile-first                             |         | Partial       |                  | Partial     | ✓

AI documentation and note generation tools also exist, but they're reactive: they process what happened in a session, during or after the fact. None of them help a clinician plan or generate what to run in the first place.

And while static curriculum libraries exist (TheraPlatform's optional Wiley Treatment Planners, for example, give therapists access to over 1,000 evidence-based statements for goals, objectives, interventions, and homework), the content is static, not generated. There's no variation engine, no co-facilitation layer, and no adaptation for mixed-stage groups.

Prompt Engineering = Design

Before I wrote a line of code, I wrote a document. A long one. It laid out the user problems, the clinical constraints, the output structure I wanted, the parameters the system needed to accept, and the guardrails the model had to operate within. And it functioned as a form of prompt engineering: designing the system's behavior by designing its instructions.

Writing that brief clearly and precisely was product work. The quality of the output was directly proportional to the quality of the thinking I put in before touching any tooling. Prompt construction is a design skill. This project made that viscerally clear.

From the brief, I wireframed the core flows in Figma: a quick builder with structured parameters, a free-text override for nuance, a session output view, a library, and an authenticated clinician dashboard.

Waypoint wireframes

AI-Core Product Design

Most AI products add intelligence to an existing workflow. Waypoint is different — remove the AI and there's nothing left.

The decisions that matter most are how the model behaves, what it's allowed to say, and how much control the clinician retains over the output.

Constraining the Model

Clinical ruleset baked into the prompt layer. The model doesn't run free. Every generation is constrained to evidence-based frameworks — CBT, DBT, motivational interviewing, trauma-informed care. This wasn't a safety guardrail bolted on after the fact. It was the first design decision I made, before a single UI element existed. In AI-core products, the prompt layer is the product.
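To make that concrete, here's a minimal sketch of what a prompt-layer ruleset can look like. The framework list comes from the paragraph above; the constant names and the wording of the rules are illustrative, not Waypoint's actual prompt.

```typescript
// Illustrative prompt-layer ruleset; not Waypoint's real prompt text.
const APPROVED_FRAMEWORKS = [
  "CBT",
  "DBT",
  "motivational interviewing",
  "trauma-informed care",
] as const;

const CLINICAL_SYSTEM_PROMPT = `
You generate group therapy session plans for behavioral health clinicians.
Rules:
- Ground every exercise and discussion prompt in one of these evidence-based
  frameworks: ${APPROVED_FRAMEWORKS.join(", ")}, and name the framework used.
- Never diagnose, prescribe, or suggest medication changes.
- Assume a mixed-stage group: content must work for someone in week one and
  week thirty simultaneously.
- Return only the structured plan format requested, with no extra commentary.
`;
```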

Insurance billing alignment as a constraint, not a feature. Session output is structured to align with billable service categories from the start. A clinician can't use content that conflicts with what they're billing. Building this into generation rather than leaving it as a manual check removes an entire failure mode.
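One way to enforce that constraint mechanically is to validate the model's output against a schema before anything reaches the clinician. A hedged sketch using zod; the category names are placeholders, not real billing codes:

```typescript
import { z } from "zod";

// Placeholder categories; a real deployment would use the service
// categories the clinic actually bills against.
const BillableCategory = z.enum([
  "group_psychotherapy",
  "psychoeducation",
  "skills_training",
]);

const SessionPlanSchema = z.object({
  title: z.string(),
  billableCategory: BillableCategory, // required: no category, no plan
  segments: z.array(z.object({ heading: z.string(), body: z.string() })),
});

// Reject any generation that doesn't align with a billable service
// category, removing that failure mode before the clinician sees output.
function parsePlan(raw: unknown) {
  return SessionPlanSchema.parse(raw); // throws on constraint violation
}
```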

HIPAA surface area minimized by design. Because the system is purely generative and no PHI is stored at the participant level, the compliance footprint stays small. This was a product decision made at the architecture stage — not a compliance retrofit. A clinician who can't trust the tool's data handling won't use it regardless of how good the output is.

Human-In-The-Loop

Inline segment regeneration. Clinicians can flag any section of a generated plan — an exercise, a discussion prompt, a closing activity — and regenerate just that piece without starting over. This is the most important feature in the product. It acknowledges that AI output is a starting point, not a final answer, and puts the clinician in the role of editor rather than recipient.
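A minimal sketch of how segment-level regeneration can work: the flagged segment is rewritten while the rest of the plan rides along as context, so the replacement stays coherent with what the clinician is keeping. `callModel` and the field names are assumptions, not Waypoint's actual code.

```typescript
interface Segment {
  id: string;
  heading: string; // e.g. "Opening check-in", "Closing activity"
  body: string;
}

// Stand-in for the app's actual model API client.
declare function callModel(prompt: string): Promise<string>;

// Regenerate one flagged segment without touching the rest of the plan.
async function regenerateSegment(
  plan: Segment[],
  segmentId: string,
  clinicianNote?: string // optional "here's why this didn't work" context
): Promise<Segment[]> {
  const target = plan.find((s) => s.id === segmentId);
  if (!target) return plan;

  // The untouched segments are passed as context, not as text to rewrite.
  const context = plan
    .filter((s) => s.id !== segmentId)
    .map((s) => `${s.heading}: ${s.body}`)
    .join("\n");

  const body = await callModel(
    `Rewrite only the "${target.heading}" segment so it fits this plan:\n` +
      `${context}\n` +
      (clinicianNote ? `Clinician feedback: ${clinicianNote}` : "")
  );

  return plan.map((s) => (s.id === segmentId ? { ...s, body } : s));
}
```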

Free-text context override. Structured parameters (session type, duration, format) get you 80% of the way there. The free-text field handles the rest — "this group has been together for six months," "someone disclosed last week, keep things grounded today." The model reads these and adapts. Designing this input carefully was as important as any visual decision I made.
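Mechanically, that design is mostly prompt assembly: structured parameters set the frame, and the free text rides along as the highest-priority context. A sketch under assumed field names:

```typescript
// Field names are illustrative, not Waypoint's actual schema.
interface BuilderParams {
  sessionType: string;     // e.g. "process group"
  format: string;          // e.g. "in-person"
  durationMinutes: number;
  freeText?: string;       // the nuance dropdowns can't capture
}

function buildUserPrompt(p: BuilderParams): string {
  const base =
    `Plan a ${p.durationMinutes}-minute ${p.format} ${p.sessionType} session.`;
  // Free text is framed as the clinician's most specific, most recent
  // knowledge of the group, so the model weights it over the defaults.
  return p.freeText
    ? `${base}\nClinician context (prioritize this): ${p.freeText}`
    : base;
}
```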

Tone and emotional register controls. Clinicians can indicate the group's emotional state before generating — high energy, heavy week, someone new in the room. The output adapts accordingly. This reflects something I believe strongly: designing for the emotional context of the user is more important than designing for their task. A clinician whose group just went through something hard needs a different session than one whose group is thriving.
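Tone can be handled the same way: a small set of emotional registers, each mapped to explicit pacing guidance in the prompt. The register names and guidance text here are illustrative:

```typescript
// Illustrative registers; the real set would come from clinician feedback.
type Register = "high_energy" | "heavy_week" | "new_member";

const TONE_GUIDANCE: Record<Register, string> = {
  high_energy: "Channel the energy: interactive exercises, faster pacing.",
  heavy_week: "Keep it grounded: slower pacing, more holding, no surprises.",
  new_member: "Assume zero shared history; build safety before depth.",
};
```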

Traditional software design asks "what does the user need to do?" AI-core design asks "what does the user need to trust, correct, and control?" Every decision above is an answer to that second question.

Operating Principles

Constrain the model as a design act. The most important design decisions aren't in the UI — they're in what you tell the model it can and can't do.

Trust is load-bearing. In healthcare AI, a user who doesn't trust the output won't correct it — they'll abandon the tool entirely.

Emotional context over task context. Who is this person, at what point in their week, in what state of mind? That determines everything about how the interface should behave.

Design for the feedback loop. AI products that can't be corrected or refined in the moment fail in practice. Control mechanisms aren't features — they're the foundation.

System Design

I made a deliberate choice to keep the system lightweight. This is a tool for a small, specific population. Over-engineering it would have been slower to ship and harder to maintain as a solo builder. The architecture has three layers:

User Input Layer. Structured dropdowns (session type, format, duration, audience) plus free-text for nuance. Two modes because one size doesn't fit all clinical contexts.

Processing Layer. Prompt construction, clinical ruleset enforcement, and contextual interpretation. The model doesn't run free — it's constrained to evidence-based frameworks.

Output Layer. Structured session plans with swappable segments, printable PDF export, and a searchable library. Designed for a clinician who has four groups today and needs to move.
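Taken together, the three layers are a short pipeline. A sketch that reuses the illustrative helpers from earlier sections, declared here as stand-ins; none of this is the production code.

```typescript
// Stand-in declarations for the sketches earlier in this case study.
declare const CLINICAL_SYSTEM_PROMPT: string;
declare function callModel(prompt: string): Promise<string>;
declare function buildUserPrompt(p: {
  sessionType: string;
  format: string;
  durationMinutes: number;
  freeText?: string;
}): string;
declare function parsePlan(raw: unknown): unknown;

async function generateSession(params: Parameters<typeof buildUserPrompt>[0]) {
  // Input layer: structured dropdowns plus optional free text arrive here.
  // Processing layer: ruleset enforcement and context live in the prompt.
  const raw = await callModel(
    `${CLINICAL_SYSTEM_PROMPT}\n\n${buildUserPrompt(params)}`
  );
  // Output layer: parse into a structured plan with swappable segments;
  // that structure is what feeds PDF export and the library downstream.
  return parsePlan(JSON.parse(raw));
}
```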

Build Process

When I say "limited experience," I mean it. I had used Cursor and Figma Make while at Dropbox, and it felt limiting; I never found the output compelling. My team didn't actually use what we were generating, but it was nice to play with.

So I started with Claude Code as my primary development environment and used GitHub for version control and Vercel for deployment. Every piece of that sentence was new to me two days before I shipped. Whenever I had a question, I asked Claude to ELI5 it, and I've learned a ton in the last few days. It's been incredibly empowering.

Day 0: Identified the problem, started the brief. Interviews done. Document written. Figma sketches roughed in. No code yet.
Day 1: First environment, first deployment, first failure. Set up VSCode, Claude Code, GitHub repo. Learned what a CLI actually is by using one. Pushed a broken deployment. Figured out how to revert it.
Day 2: Shipped v1. Working session builder, live on Vercel. Claude API connected. Husband used it the same day.
Ongoing: Refinement, iteration, continued learning. Real usage producing real feedback. Design refinements, performance tuning, and learning MCPs and engineering fundamentals week over week.

There have been a lot of mistakes. Broken builds, bad pushes, output that didn't match the clinical intent. Each one was a faster teacher than any tutorial. I learned to revert, to debug, to read error output without panicking, and to know when to ask the model versus when to ask myself.

Waypoint build process

Visual Design

Healthcare design research points to blues and greens for contexts requiring trust, calm, and safety.

I chose forest green as the primary accent, warm beige backgrounds, and off-white surfaces with light noise textures to reduce visual fatigue while maintaining WCAG-compliant contrast ratios. The goal was an interface that felt clinical without feeling cold.
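WCAG contrast is also a computable check, so accent-on-background pairs can be verified in code rather than by eye. Below is a standard implementation of the WCAG 2.x relative-luminance formula; the hex values are stand-ins, not Waypoint's actual palette.

```typescript
// WCAG 2.x relative luminance for a "#rrggbb" color.
function luminance(hex: string): number {
  const [r, g, b] = [0, 2, 4].map((i) => {
    const c = parseInt(hex.slice(i + 1, i + 3), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Stand-in values: a forest green on a warm beige.
const ratio = contrastRatio("#2d5a3d", "#f5f0e6"); // ≈ 7:1, clears AA's 4.5:1
```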

Impact

In its first week live, my husband reported saving several hours of prep time and said his sessions were meaningfully better: more structured, more varied, more confident to run. That's real recovered time for a person doing emotionally demanding work. And now we get more time together, and our dinner conversation has space for other topics.

"I'm not dreading Sunday night prep anymore."

— A clinical colleague, week one

Reflection

The most important thing this project demonstrates is the process as much as the product. In 2026, a principal-level designer can't afford to treat engineering as someone else's problem. The ability to prototype with real tooling, deploy, iterate on a live system, and learn in production is now a core competency.

Waypoint is the proof of concept for that belief. I identified the opportunity, validated the problem, designed the solution, built it, shipped it, and am improving it week over week based on real usage data. I'm also learning — genuinely learning — agentic coding patterns, MCP integrations, and engineering fundamentals in real time. The project is a long-term investment in my own capability.

Future Roadmap

Every item on the roadmap comes directly from feedback after real use.

  • 1 Multi-account database. Right now Waypoint serves one clinician well. A database layer with individual accounts unlocks the tool for clinical teams and group practices — the natural next unit of scale.
  • 2 Session deduplication engine. The core problem is repeat content. Saving sessions to a per-clinician library and surfacing when content has been used before closes the loop on the original problem entirely.
  • 3 Group repository. Shared session libraries for teams — so clinicians aren't duplicating work across colleagues running the same programs.