Field Course · 18 min

Course Mapper: A Retrospective

A functional prototype for mapping outcomes, modules, activities, and assessments — and the thinking that shaped it.

Author: Juli James · Team: Ed Dev · Cycle: FY26 · Format: Field course
Lesson 01 · The Problem We're Solving

The friction isn't dramatic. It's accumulated.

Course mapping at most institutions is held together by a Word doc template, a chain of emails, and translation work that lives in someone's head. Course Mapper started from naming the workflow that was actually happening — not the one the template was supposed to support.

The manual course mapping workflow starts with a Word doc template that has to be found before it can be used. The template lives somewhere in shared storage, comes in two variants (academic-credit course and micro-credential build), and has minor per-college variations. The most recent version of whichever variant fits the build gets opened — and the workflow begins.

From there: the high-level course info that doesn't require faculty input gets filled in first. The partially completed template gets sent to the Subject Matter Expert (SME). Collaboration is one-author-at-a-time and asynchronous — share, wait, return, repeat.

"

The friction in this workflow isn't dramatic; it's accumulated. The chase-and-retrofit cycle to recover overview content, alignment mappings, and content drafts — multiplied across modules and across courses — becomes its own significant time cost.

Where the workflow gets stuck

The template is one large table, structured around what Quality Matters (QM) review will need to see on the Canvas side. The structure is reasonable — it's meant to scaffold both the planning conversation and the eventual Canvas build against a known evaluation rubric. But it carries problems the rest of the workflow then has to absorb. Use the tabs below to look at each one.

The table format struggles with content density.

Once modules get bullet-point-heavy or content-heavy, the giant-table-in-a-Word-doc layout becomes unwieldy. Course-level outcomes (CLOs) and module-level outcomes (MLOs) typically make it onto the document at the module level, and learning activities and assessments get listed — but the visual alignment between MLOs and the specific activities or assessments that satisfy them isn't drawn out. The alignment is conceptually present in the document; it's not visually present on the page.

QM review on the Canvas side is fundamentally about visible alignment — can you see that MLO 1.1 is covered by Assignment 1? — so the gap between what the template captures and what review will eventually need has to be closed in someone's head.

The template doesn't capture module overview content.

Module overview content is the connective tissue needed to start scaffolding the actual Canvas course structure. The template doesn't ask for it, so faculty rarely provide it in the initial pass.

Even when a faculty member returns a "completed" map, the instructional designer (ID) is usually chasing them back for module overview content, explicit MLO-to-activity-or-assessment mappings, and the actual content of the activities and assessments themselves.

Faculty rarely return content in the canonical template.

Some faculty fill out the Word doc as intended; many don't. They deliver their work in whatever format they already think and work in — their own spreadsheets, their own Word documents, or pieces of content scattered across email.

The translation work — into whichever shape will eventually become the Canvas build — falls to the ID. The cognitive load of constantly translating across formats is real, and accumulates across every build.

The collaboration surface is improvised every time.

Beyond the document itself, the build often accretes a Microsoft Teams channel — sometimes one the ID creates, sometimes one the faculty already had and invites the ID into — and sometimes a OneDrive folder, depending on the faculty member's preferred working style.

There is probably an ideal collaboration shape for an ID/SME build. There isn't one ideal shape across faculty right now. The shape gets negotiated for each build, often implicitly.

The result is a workflow whose work product leaks into wherever the most recent conversation happened, rather than into a single canonical artifact. Email becomes a content repository by default — a small but persistent indicator that the template isn't holding what the work actually requires.

Lesson 02 · What Course Mapper Changes

Four shifts that remove work that doesn't have to exist.

Course Mapper's speed gains don't come from doing the same work faster. They come from removing chase-and-retrofit work that wouldn't exist if alignment were captured up front — and from making the resulting artifact better, separately from speed.

Where the time savings come from

Four specific steps in the workflow get cheaper. For each one, the manual baseline is what currently happens; the Course Mapper version is what the tool's data model makes possible.

1. Module-level MLO-to-activity/assessment mapping

Manual baseline

Faculty return a "completed" map that lists activities and assessments but doesn't explicitly link each one to the MLOs it covers. The ID either infers the linkage and tracks it mentally, or sends clarification emails back to faculty.

With Course Mapper

The data model requires the MLO linkage at the point an activity or assessment is created. The mapping is captured once, at build time, by the person who actually knows — eliminating the post-hoc clarification email cycle.

2. Module overview content collection

Manual baseline

The Word doc template doesn't prompt for module overview content, so faculty rarely provide it in the initial pass. The ID chases faculty for overview text after the fact, often after the rest of the module structure depends on it.

With Course Mapper

Module-level overview is part of the data model. Faculty can provide it during the initial pass; if they don't, the gap is visible immediately — when it's cheapest to address — rather than discovered later.

3. Format translation

Manual baseline

Faculty deliver content in spreadsheets, Word docs, scattered emails, or improvised Teams shares. The ID translates each variant into whichever shape will eventually become the Canvas build.

With Course Mapper

The data model is the Canvas-ready shape. Content references via links keep the canonical artifact stable even when content lives in Drive, OneDrive, Canvas, or elsewhere. Translation work is replaced with reference work.

4. Structural build vs. content collection

Manual baseline

Content gaps block or delay structural Canvas build, since structure and content are entangled in the same Word doc template.

With Course Mapper

The map is structural data. Activities and assessments can exist as placeholders with links attached as content arrives. Content streams in over weeks without blocking the structural work.
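The four shifts above all rest on the same structural idea: the map is data, and alignment is part of that data's shape. A minimal sketch of what such a data model might look like — every name here is hypothetical, not Course Mapper's actual schema — makes the shifts concrete: MLO linkage is required at creation time (shift 1), module overview is a first-class field (shift 2), content is referenced by link rather than absorbed (shift 3), and an item with no links yet is simply a placeholder (shift 4).

```typescript
// Illustrative sketch only; field names are assumptions, not the tool's schema.

type LinkRef = { label: string; url: string }; // content stays where it lives

interface Activity {
  id: string;
  title: string;
  kind: "activity" | "assessment";
  mloIds: string[];        // alignment captured at creation (shift 1)
  contentLinks: LinkRef[]; // empty array = placeholder awaiting content (shift 4)
}

interface Module {
  id: string;
  title: string;
  overview: string; // prompted up front instead of chased later (shift 2)
  mlos: { id: string; text: string; cloIds: string[] }[];
  items: Activity[];
}

// Shift 1 enforced at the construction boundary: an activity or assessment
// cannot exist without at least one MLO linkage.
function createActivity(
  id: string,
  title: string,
  kind: Activity["kind"],
  mloIds: string[],
  contentLinks: LinkRef[] = [],
): Activity {
  if (mloIds.length === 0) {
    throw new Error("An activity or assessment must cover at least one MLO.");
  }
  return { id, title, kind, mloIds, contentLinks };
}
```

Under this shape, the chase-and-retrofit cycle isn't prevented by policy but by structure: the invalid state (an unmapped assessment) can't be represented at all.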

Quality improvements — separate from speed

Course Mapper produces a meaningfully better artifact than the manual process, in four ways that are separate from raw speed.

The Word doc template captures CLOs, MLOs, and activities as separate cells in a giant table; the linkages between them live in someone's head or in side notes. Course Mapper makes alignment first-class data — every activity and assessment is connected to the MLOs it covers, and every MLO is connected to the CLOs it supports.

The Alignment Report consumes this data to show MLO coverage across modules, surfacing gaps that are nearly impossible to see in the Word doc table. QM-review-ready alignment is built in from the start, not retrofitted at the Canvas build stage.
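The coverage computation an alignment report implies is simple once alignment is data rather than side notes. A sketch, with hypothetical types (these are not Course Mapper's actual interfaces): walk every mapped item, bucket it under each MLO it covers, and any MLO whose bucket stays empty is a gap.

```typescript
// Hypothetical shapes for illustration; not the tool's real schema.
interface Mlo { id: string; text: string }
interface MappedItem { title: string; mloIds: string[] }

// For each MLO, which items cover it — and which MLOs nothing covers.
function mloCoverage(mlos: Mlo[], items: MappedItem[]) {
  const covered = new Map<string, string[]>();
  for (const mlo of mlos) covered.set(mlo.id, []);
  for (const item of items) {
    for (const id of item.mloIds) covered.get(id)?.push(item.title);
  }
  const gaps = mlos.filter((m) => (covered.get(m.id) ?? []).length === 0);
  return { covered, gaps };
}
```

The `gaps` list is exactly what is nearly impossible to read off a giant Word table: an MLO that every activity quietly skipped.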

The manual workflow tends to leak its work product into wherever the most recent conversation happened — email threads, improvised Teams channels, a faculty member's personal spreadsheet.

Course Mapper centralizes the structural decisions into one artifact (the JSON export, plus the .docx export for hand-off audiences). Content can still live wherever it actually lives — Drive, Canvas, OneDrive — but the map points to it. The artifact stays stable; the content is referenced, not absorbed.

Same data model, same export shape, every time. This matters for the ID who picks up a colleague's course mid-build, for a new hire who needs to learn what a course map looks like, and for QM review on the Canvas side.

The Word doc template aspired to this consistency but couldn't enforce it — variations crept in. Course Mapper enforces the shape because the shape is the data structure.

The Word doc template can't produce a clean, consolidated read of an entire course's alignment without being printed, scrolled through, and squinted at.

Course Mapper provides two consolidated views: the Full Map slide-over panel inside the builder (useful during build, for catching gaps without leaving the editor), and the standalone Viewer route (useful during hand-off — a faculty member or reviewer can upload an exported JSON and see the map in clean read-only form without learning the builder).

"

The information was always required for QM review on the Canvas side. Course Mapper just collects it at a different — and cheaper — point in the workflow.

Lesson 03 · Scope Honesty

Course Mapper is a workflow accelerator, not a substitute for instructional design judgment.

Several things still require the same effort with or without the tool. Naming what the tool doesn't solve is part of what keeps the work load-bearing.

Course Mapper maps content; it doesn't generate it. Faculty (and IDs supporting them) still write the learning activities, assignments, readings, and overview text — in whatever tools they actually create in, whether that's Word, Canvas's rich-text editor, Google Docs, or scribbled notes.

The tool points to that content via links; it does not produce it, edit it, or proofread it.

The underlying "the faculty member has time to invest, and is engaged enough to invest it" problem is not a tool problem. Course Mapper can wait elegantly — auto-save preserves the in-progress map; content can stream in over weeks — but it can't make a busy faculty member respond faster.

The chase doesn't disappear. It just happens around content delivery rather than around alignment clarification.

Choosing the right assessment for a given MLO, scaffolding activities appropriately, calibrating cognitive load across a module — these are judgment calls made by IDs and faculty.

The tool surfaces and stores the result of that thinking; it does not replace the thinking. A bad course map and a good course map can both be valid Course Mapper outputs, and the tool can't tell which is which.

Course Mapper accepts links to content wherever it lives, which substantially reduces the cost of meeting faculty in their preferred tools. But the tool still has to be used by someone — either the faculty member or the ID acting on their behalf.

Faculty who strongly prefer their own spreadsheets aren't forced into Course Mapper; the mapping work simply shifts onto the ID, who then translates the spreadsheet into the map. The translation cost is lower than it was in the canonical-template era, but it isn't zero.

"

Naming what the tool doesn't solve is part of what keeps the work load-bearing. Recommendations are more credible when they're not universally enthusiastic.

Lesson 04 · The Thinking Behind the Design

A staff-led tool is two projects, not one.

Course Mapper's design choices come from a framework called the Two Halves of Staff-Led Innovation — the recognition that the build half of a staff-led tool and the adoption half are not symmetric, and have to be planned differently.

Companion framework

This lesson draws on The Two Halves of Staff-Led Innovation — a separate field guide that generalizes the pattern Course Mapper revealed. Course Mapper is the first project outside the framework's origin case that it has been deliberately applied to.

Two halves, two cost curves

AI-assisted development has dramatically lowered the cost of building a staff-led tool. The cost of getting it adopted has not budged. The two halves have different cost curves, different owners, and different failure modes — and Course Mapper's design choices reflect that.

The build-half discipline that shaped Course Mapper.

Course Mapper's scope is the smallest version that solves the alignment problem honestly. No server. No real-time collaboration. No content hosting. The tool references content rather than absorbing it — which keeps the build small and the maintenance burden proportional.

Persistence is local (browser auto-save) plus JSON export. The durable artifact is a file the user keeps wherever they already archive work. Cross-user collaboration is async only — Author A exports, Author B imports.
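The export/import loop that replaces real-time collaboration can be sketched in a few lines. This is an illustration of the pattern, not the tool's actual export format — the field names and validation here are assumptions: Author A serializes the map to a JSON file, Author B parses it back, and a light structural check guards against uploading something that isn't a map.

```typescript
// Hypothetical export shape for illustration; the real schema may differ.
interface CourseMap {
  course: string;
  version: number;
  modules: { id: string; title: string; overview: string }[];
}

// Author A: the durable artifact is just a JSON file the user keeps.
function exportMap(map: CourseMap): string {
  return JSON.stringify(map, null, 2);
}

// Author B: validate structural invariants before trusting an uploaded file.
function importMap(json: string): CourseMap {
  const parsed = JSON.parse(json) as CourseMap;
  if (typeof parsed.course !== "string" || !Array.isArray(parsed.modules)) {
    throw new Error("Not a valid course map export.");
  }
  return parsed;
}
```

Because the artifact is a plain file, it rides whatever storage the user already archives work in — no server, no accounts, no sync infrastructure to maintain.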

These aren't limitations to apologize for. They're the build-half choices that keep the tool maintainable by a small team without sacrificing the workflow it's meant to support.

The adoption-half landscape Course Mapper enters.

Faculty arrive with established workflows that predate the tool. Spreadsheet preferences, Teams channels, OneDrive folders — these are patterns built over years. A new tool enters this space competing against habit, not vacuum.

There isn't one ideal workflow across faculty. The right collaboration shape varies by faculty member, by college, by course type. The build-half side can produce "the right tool" for an idealized user; the adoption-half side has to negotiate with N actual users.

Course Mapper's link-based design is a deliberate adoption-half accommodation. Accepting URLs to content wherever it lives reduces the cost of meeting faculty where they already work, rather than asking them to relocate.

"

Whether the link-based accommodation is sufficient — whether the activation cost of using a new tool at all is low enough relative to the friction it removes — is the open empirical question beta testing should answer.

Knowledge Check

Course Mapper accepts URLs to content (in Canvas, Drive, OneDrive, etc.) rather than hosting the content itself. According to the design rationale, what problem does this choice address?

  • It makes the tool faster to use. Fewer screens means less time per build.
  • It reduces the activation cost of adoption by meeting faculty where their content already lives, rather than asking them to relocate it.
  • It removes the need for content review. If content lives elsewhere, the tool doesn't have to evaluate it.
  • It lets the tool work offline. Linked content stays accessible even when the tool isn't connected.

Where this work goes next

Course Mapper closes FY26 as a functional prototype with a clear scope-honesty story. The FY27 bucket has been deliberately separated: revised visual interface, platform-agnostic framing, fall 2026 ACC faculty pilot, and a content-continuity feature that emerged from the AWS and Canvas outages of FY25–FY26.

What beta testing will surface is whether the link-based design's adoption-half accommodation is enough to bridge the gap between "willing to give feedback" and "willing to invest sustained time in a real build." If it is, Course Mapper's adoption story moves meaningfully past the pattern its predecessor tool revealed. If it isn't, the framework that named the gap is still earning its keep — by making the gap visible, on purpose, rather than discovered later.

"

The point isn't to prevent staff-led innovation. It's to make sure what gets built does the work it was meant to do — and that what doesn't get the support to be adopted is at least known not to, rather than discovered later.

End of course