An Unorganized Organizer
On paper, the existing Session Organizer had everything a teacher needed: lesson materials, supplementary resources, and planning tools, all in one place. In practice, it was an undifferentiated pile. The v1 experience handed teachers a room full of furniture and expected them to arrange it themselves.
That's a solvable problem on its own. What made it thornier was the diversity of our users. A new teacher needs a guide. An experienced teacher needs the interface to get out of their way. Designing too heavily for one meant abandoning the other, but we'd been, perhaps unconsciously, designing for neither.
Understanding the Problem Space
Before touching a wireframe, we invested a week in Montreal doing something that's easy to skip when timelines are tight and teams are used to working remotely: actually understanding the problem space from the ground up. That meant getting into our teachers' full daily reality, not just the product moments but the before-school planning, the after-hours grading, and the moments where our platform fit into a broader ecosystem of tools they were already using. We ran affinity mapping sessions to cluster themes from our research, storyboarded user journeys to stress-test assumptions, and used the whiteboard time to generate and pressure-test feature ideas as a cross-functional group.
We didn't leave Montreal with a finished design, but we left with something just as valuable: a focused, agreed-upon set of requirements and a shared mental model of who we were designing for and why.
List of Content → List of Actions
The central insight from our research was a shift in how we understood the product's job. The v1 experience was organized around content: here's everything that exists, go find what you need. What teachers actually wanted was something organized around actions: here's what to do, in order, right now.
But that reframe only held for part of our user base. New teachers wanted a step-by-step script. Experienced teachers wanted the flexibility to break from it, customize their sequence, and make the tool work around their existing instincts rather than replacing them.
That dual need became the design constraint everything else had to satisfy. Not a checklist or a flexible explorer, but both, coexisting in the same interface.
Zones of Focus
Our first priority was reclaiming the interface from its own complexity. The v1 felt utilitarian to the point of austerity: functional, but neither inviting nor legible at a glance.
We moved to a card-based layout as the primary organizing unit. Cards did several things at once:
- They gave each piece of content visual breathing room.
- They allowed us to embed contextual images and at-a-glance student progress metrics directly into the content object.
- They created a navigational affordance that let teachers "zoom in" to a card to go deeper, or zoom back out to reorient.
For teachers who'd described the old experience as overwhelming, this alone was a meaningful shift.
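To make the card's job concrete, here's a rough sketch of the shape each card carried; the names are illustrative, not our production schema:

```ts
// Hypothetical sketch of a card's content object; names are illustrative.
interface ProgressMetric {
  label: string; // e.g. "Students on track"
  value: number; // current count
  outOf?: number; // optional denominator for "12 of 24"-style display
}

interface SessionCard {
  id: string;
  title: string;
  thumbnailUrl: string; // the contextual image embedded in the card
  progress: ProgressMetric[]; // at-a-glance student progress metrics
  childIds: string[]; // what "zooming in" navigates to
  parentId?: string; // what zooming back out returns to
}
```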
Choose Your Own Path
A learning platform is a deep content hierarchy: modules, units, lessons, sessions, and resources, each nested inside the next. Getting lost in such an environment isn't a user failure; it's a design failure.
We addressed this with persistent wayfinding woven throughout the experience: 'Last Taught' indicators on cards, completion states visible at every level of the hierarchy, and a collapsible side navigation that could serve as a quick map of the full content structure when teachers needed it and disappear when they didn't.
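To illustrate how completion could stay consistent at every level, here's a minimal sketch of a recursive rollup over the content hierarchy; the types and names are hypothetical, not the platform's actual data model:

```ts
// Hypothetical sketch: a recursive content node and a completion rollup.
type NodeKind = 'module' | 'unit' | 'lesson' | 'session' | 'resource';
type Completion = 'not-started' | 'in-progress' | 'complete';

interface ContentNode {
  id: string;
  kind: NodeKind;
  lastTaught?: Date; // drives the "Last Taught" indicator on cards
  taught?: boolean; // set on leaf nodes (sessions, resources)
  children: ContentNode[];
}

// A node's completion state is derived from its descendants, so the same
// signal can surface at every level of the side navigation.
function completionOf(node: ContentNode): Completion {
  if (node.children.length === 0) {
    return node.taught ? 'complete' : 'not-started';
  }
  const states = node.children.map(completionOf);
  if (states.every((s) => s === 'complete')) return 'complete';
  if (states.every((s) => s === 'not-started')) return 'not-started';
  return 'in-progress';
}
```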
For experienced users, that wayfinding provided the confidence to chart their own path without losing their place. For newer teachers, it provided a clear thread through their teaching journey.
Checklist vs Dynamic List
With the spatial orientation problems solved, we could tackle the core tension head-on: how do you serve a new teacher who needs guidance and a veteran who needs flexibility in the same interface?
The answer was two layers working from the same data. Every piece of content carried an action checklist of specific tasks, with timing guidance indicating whether each step was most useful before or after teaching the session. This gave experienced teachers a quick reference: a scannable, contextual list of what to do and when, without prescribing a rigid sequence.
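Conceptually, each checklist entry needed only a little structure to be shown in context and reused elsewhere. A minimal sketch, with hypothetical names:

```ts
// Hypothetical sketch of a content-level checklist task.
type Timing = 'before-session' | 'after-session';

interface ChecklistTask {
  id: string;
  contentId: string; // the lesson or session this task belongs to
  description: string; // e.g. "Print exit tickets"
  timing: Timing; // when the step is most useful
  done: boolean;
}
```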
For newer teachers, we built what we called the Command Center. This was a dynamically generated, sequential task queue that pulled from all of those content-level checklists and organized them into a single, prioritized "what to do next" list. Teachers who wanted to could run their whole week entirely from this view, with tasks that crossed off and archived as they were completed, a queue that reprioritized automatically, and embedded progress-tracking widgets that gave real-time insight into which students and skills needed extra attention.
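Because both layers worked from the same data, the Command Center could be derived rather than maintained by hand. A simplified sketch of that derivation, again with hypothetical names and an assumed due-date heuristic:

```ts
// Hypothetical sketch of the Command Center queue: flatten every
// content-level checklist into one prioritized "what to do next" list.
interface QueueItem {
  id: string;
  description: string;
  dueDate: Date; // assumed to derive from the session's scheduled date
  done: boolean;
}

// Completed tasks drop out (they "cross off and archive"); the rest are
// reprioritized by due date on every rebuild, so rescheduling a session
// automatically reorders what comes next.
function buildQueue(checklists: QueueItem[][]): QueueItem[] {
  return checklists
    .flat()
    .filter((task) => !task.done)
    .sort((a, b) => a.dueDate.getTime() - b.dueDate.getTime());
}
```

Since the queue is recomputed from the checklists rather than stored separately, the two layers can never drift apart.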
AI-Assisted Development
Getting this design into user testing presented a specific challenge. The interactions we needed to validate (skipping sessions, watching the Command Center queue update in response, seeing progression indicators recalculate) were too conditional and interdependent to prototype cleanly in Figma. Standard prototyping tools are great for linear flows, but this wasn't a linear flow.
We made a deliberate call to prototype with AI-assisted development instead. With some iteration on the prompting and structure, we were able to build a high-fidelity, genuinely functional testing artifact that let teachers interact with the real logic of the system, not just a simulation of it. Users could skip a session and watch downstream tasks update. They could navigate between the card view and the Command Center and see their state persist.
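To give a flavor of what "real logic" meant here, the sketch below shows the kind of shared state the prototype ran on; it's a simplified, hypothetical reducer, not the generated code itself. Both views read from one store, so skipping a session in one view is reflected in the other, and nothing resets on navigation:

```ts
// Simplified, hypothetical sketch of the prototype's shared state.
interface PrototypeState {
  skippedSessionIds: Set<string>;
  view: 'cards' | 'command-center';
}

type Action =
  | { type: 'skip-session'; sessionId: string }
  | { type: 'switch-view'; view: PrototypeState['view'] };

function reduce(state: PrototypeState, action: Action): PrototypeState {
  switch (action.type) {
    case 'skip-session':
      // Downstream tasks for a skipped session are filtered out the next
      // time the queue is derived from this state.
      return {
        ...state,
        skippedSessionIds: new Set(state.skippedSessionIds).add(action.sessionId),
      };
    case 'switch-view':
      // Only the view changes; everything else persists across navigation.
      return { ...state, view: action.view };
  }
}
```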
The tradeoff was that it worked almost too well. Several testers initially assumed they were looking at the finished product, which introduced noise into the feedback sessions. That was a calibration problem worth learning from: when your prototype is indistinguishable from a real product, you have to work harder to remind people they're testing a concept.
Feedback and Next Steps
User testing validated the core directions we'd been designing toward. Testers across experience levels found the navigation intuitive, and the dual-layer approach addressed the exact friction points that had driven them toward other tools in the first place.
Teachers described the experience as visually clear, easier to orient within, and less exhausting than the original. One tester called it "visually genius...so appealing, and I love how it's just all right there on one page."
What that feedback confirmed, more than anything, was the value of the upfront research investment. The design decisions that resonated most with users — the card-based navigation, the dual-mode task system, the persistent wayfinding — all traced directly back to requirements we'd defined in Montreal, before a single pixel was designed.