LearnBuilder

Will Claude Design replace e-learning authoring tools?

Peter
Tags: AI, authoring, instructional design, interactive slides, LearnBuilder

Claude Design shipped into Anthropic Labs on April 17. It is a real product, not a chat-window improvisation. You describe a design — a prototype, a wireframe, a pitch deck, a marketing asset — and it generates one inside a dedicated visual-collaboration interface running on Claude Opus 4.7. You can leave inline comments, edit elements directly, nudge things with custom adjustment sliders, apply your organization's design system automatically, and export to Canva, PDF, PPTX, or HTML. Designs hand off to Claude Code when they need to become real software.

The first time you see it generate a working interactive prototype from a paragraph of description, the natural question follows quickly. If Claude Design can do this, why does anyone still pay for Articulate Storyline, Adobe Captivate, iSpring, Rise, or any of the other authoring tools that sit at the centre of the standard L&D toolchain? The first impression makes the answer feel obvious: they don't, not for long.

I have been building LearnBuilder for the last year, so I have a stake in this question. I have also spent enough time pushing the limits of what generated artifacts can actually do in a course context to know that the easy answer is wrong in both directions. Claude Design is genuinely going to absorb work that authoring tools used to handle, especially the visual-design and prototyping side. It is also pointed at a different problem than e-learning, and the parts it does not solve are the parts L&D teams cannot ship without.

What Claude Design actually is — and is not

It is worth being precise about the product, because a lot of the discussion around it has blurred into a general "AI replaces design tools" frame that misses what Anthropic actually built.

Claude Design is positioned for designers, product managers, founders, marketers, and non-designers who need to produce visual material. The output categories listed in the launch are realistic prototypes, product wireframes, design explorations, pitch decks, and marketing collateral, with code-powered prototypes that can include voice, video, and 3D. The handoff path to engineering is Claude Code. The collaboration model is organization-scoped sharing with design-system enforcement.

What it is not is an e-learning tool. There is no concept of a learning objective, a knowledge check, a retrieval-practice activity, a branching scenario tied to a competency, a learner state that survives across sessions, a SCORM or xAPI export, or an LMS integration. None of this is a criticism — Claude Design is not trying to do those things. But that list is the e-learning gap, and it is the gap that matters when you ask whether the tool replaces an authoring platform.

What Claude Design does genuinely well — and what it undermines in traditional authoring tools

Three pieces of Claude Design hit at the heart of what most authoring tools used to charge for.

Generation quality at the surface. A prototype that runs in the browser, looks competent, and demonstrates the interaction logic correctly — produced from a paragraph of description in well under a minute. For the kind of one-off visual artifact that a designer used to spend two days on, this is a different category of work, not a marginal speed-up.

Direct editing of generated output, not re-prompting. This is the piece that surprised me. The first wave of generated-artifact tools forced you to nudge by re-describing what you wanted, which usually triggered a partial regeneration, which usually changed things you did not want changed. Claude Design exposes the elements directly. Inline comments, direct manipulation, adjustment sliders — you fix the thing yourself instead of gambling on the next round. That closes the iteration problem that breaks the chat-window workflow.

Design systems applied automatically. Brand consistency was a real moat for authoring tools — your Storyline output looked like your Storyline output because the template enforced it. Claude Design folds your organization's design system into generation directly, so the artifact lands inside your visual identity from the first draft. Combined with org-scoped sharing, the consistency-and-collaboration argument that traditional template tools used to make is now table stakes in a different product.

If you are an L&D team building a single high-craft prototype, a marketing asset for a learning campaign, a pitch deck for a programme, or an exploratory wireframe — Claude Design is now in the toolkit, and it should be. The question is whether that is the same as the work an authoring tool actually does.

What e-learning needs that visual design does not

Here is where the analysis splits. Claude Design is a visual collaboration tool. E-learning has visual-collaboration needs, but the visible surface of an interaction is maybe a third of the work. The rest is the part that does not show up in a prototype review.

Instructional design as a constraint, not a style. Learning content is structured by learning objectives, sequenced for retrieval and transfer, balanced across content types so a learner is not reading text blocks for twenty minutes. A general visual tool has no model of any of this. It will produce a beautiful pitch deck on photosynthesis. It will not insist that the deck include knowledge checks, that the knowledge checks test recall before recognition, that the scenario activity is anchored to a measurable behaviour, or that the lesson does not exceed cognitive-load thresholds. That is the instructional-design layer, and it is the layer that decides whether the content actually teaches.

Persistence of learner state. A learner who closes a tab in the middle of a branching scenario should resume where they left off. A learner who spent fifteen minutes building a configuration in a simulation should find it on return. Generated prototypes do not get session persistence as a default, and there is no equivalent of cmi.suspend_data in a Claude Design export.
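To make the suspend_data point concrete, here is a minimal sketch of the save/restore round-trip SCORM 1.2 content performs. In a real course the LMS injects `window.API`; it is stubbed here so the round-trip runs in isolation, and the scenario state being saved is illustrative.

```javascript
// In a real SCORM 1.2 course the LMS provides window.API. Stubbed here
// so the save/restore round-trip can be demonstrated standalone.
const lmsStore = {};
const API = {
  LMSSetValue: (key, value) => { lmsStore[key] = String(value); return "true"; },
  LMSGetValue: (key) => lmsStore[key] ?? "",
  LMSCommit: () => "true",
};

// Save the learner's mid-scenario state when they leave.
function saveState(state) {
  // cmi.suspend_data is a single string (4,096 chars in SCORM 1.2),
  // so structured state has to be serialized into it.
  API.LMSSetValue("cmi.suspend_data", JSON.stringify(state));
  API.LMSCommit("");
}

// Restore it on the next launch so the learner resumes where they left off.
function restoreState() {
  const raw = API.LMSGetValue("cmi.suspend_data");
  return raw ? JSON.parse(raw) : null;
}

saveState({ branch: "escalate", step: 3, config: { bladeCount: 4 } });
const resumed = restoreState();
console.log(resumed.step); // 3
```

This is exactly the plumbing a generated prototype lacks: without something wiring state into `cmi.suspend_data`, closing the tab erases the scenario.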

Reporting and grading. For graded interactions, the platform needs to know what the learner did. Did they reach the optimal configuration? Did they explore the full range, or click once and leave? Did they choose the best answer, the second-best, or the trap option? A hand-built interaction wires this up deliberately. A generated prototype does not have a contract for what to report — and there is no LMS at the other end to receive it anyway.
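The "contract for what to report" looks something like the SCORM 2004 interactions data model. A sketch, with the LMS-provided `API_1484_11` stubbed and the interaction id invented for illustration:

```javascript
// SCORM 2004 exposes window.API_1484_11; stubbed here for demonstration.
const store = {};
const API_1484_11 = {
  SetValue: (key, value) => { store[key] = String(value); return "true"; },
  GetValue: (key) => store[key] ?? "",
  Commit: () => "true",
};

// Record one multiple-choice answer: which option the learner chose,
// and whether it was the correct one or the trap option.
function reportChoice(index, interactionId, chosen, correct) {
  const p = `cmi.interactions.${index}.`;
  API_1484_11.SetValue(p + "id", interactionId);
  API_1484_11.SetValue(p + "type", "choice");
  API_1484_11.SetValue(p + "learner_response", chosen);
  API_1484_11.SetValue(p + "result", chosen === correct ? "correct" : "incorrect");
  API_1484_11.Commit("");
}

reportChoice(0, "scenario-1-escalation", "delay_response", "escalate_now");
console.log(store["cmi.interactions.0.result"]); // "incorrect"
```

Every graded interaction in a hand-built course carries deliberate wiring of this shape; a generated prototype has nowhere to send it.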

SCORM, xAPI, cmi5, AICC. Corporate L&D runs on tracking, and tracking only works when the content reports it correctly through the integration format the LMS speaks. Export to PDF, PPTX, HTML, or Canva — all useful — does not produce a SCORM package or an xAPI statement. This is unglamorous plumbing, and it is the difference between content you can demo and content you can ship into a corporate compliance programme.
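For the xAPI side, the unit of tracking is a statement. A minimal sketch of the shape an LRS expects, per the xAPI 1.0.3 spec; the email, activity URL, and lesson name are placeholders:

```javascript
// Build a minimal xAPI statement: actor (who), verb (did what),
// object (to which activity).
function buildStatement(email, verbId, verbName, activityId, activityName) {
  return {
    actor: { objectType: "Agent", mbox: `mailto:${email}` },
    verb: { id: verbId, display: { "en-US": verbName } },
    object: {
      objectType: "Activity",
      id: activityId,
      definition: { name: { "en-US": activityName } },
    },
  };
}

const stmt = buildStatement(
  "learner@example.com",
  "http://adlnet.gov/expapi/verbs/completed",
  "completed",
  "https://example.com/courses/safety-101/lesson-3",
  "Lesson 3: Hazard identification"
);

// In a running course this is POSTed to the LRS:
//   POST <endpoint>/statements
//   X-Experience-API-Version: 1.0.3
console.log(stmt.verb.display["en-US"]); // "completed"
```

A PDF or HTML export contains nothing that emits these, which is why demo-able and shippable are different properties.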

Accessibility maintained over a course, not at the prototype level. WCAG AA across forty lessons is a sustained property, not a one-time review. Missing alt text, hover-only interactions with no keyboard equivalent, missing captions, contrast failures — these are easy to introduce and easy to miss when the focus is on the interaction. A learning platform needs ongoing accessibility checking; a design tool checks at the point of design.
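To show why this has to be an ongoing automated check rather than a one-time review, here is a deliberately toy sketch of one rule such a scanner applies across every lesson: images with no alt text. A real WCAG AA scanner covers far more (contrast, keyboard access, captions) and runs against parsed markup, not a regex.

```javascript
// Toy check: find <img> tags with no alt attribute. Illustrative only --
// a production scanner parses the DOM and applies many WCAG AA rules.
function findImagesMissingAlt(html) {
  const imgTags = html.match(/<img\b[^>]*>/gi) ?? [];
  return imgTags.filter((tag) => !/\balt\s*=/.test(tag));
}

const lesson = `
  <img src="turbine.png" alt="Wind turbine diagram">
  <img src="decorative-divider.png">
`;
console.log(findImagesMissingAlt(lesson).length); // 1
```

Run at publish time over forty lessons, even this one rule catches regressions that a prototype-stage design review never sees again.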

Course-scale coherence. A pitch deck is one artifact. A course is fifteen lessons that share an instructional spine, sequence toward a competency, share a visual identity, share a glossary of terms, share a tone, and report into a single learner record. Even with a strong design system, generating each lesson independently in a visual tool produces fifteen pretty artifacts with no spine connecting them.

None of these gaps are weaknesses of Claude Design. They are simply outside the problem it set out to solve.

Where this leaves authoring tools

Two things are happening at once, and they look contradictory until you separate them.

The first: traditional template-driven authoring tools are in real trouble. Tools whose value proposition was "we built a drag-and-drop interaction library so you don't have to code, and our templates enforce consistency" are now competing with a model that generates the interaction, applies your design system, and lets you edit directly. The interaction library was the moat. Generation dissolves it. The tools whose pricing pages still emphasise "200+ ready-made templates" are about to find that this is not the value any more.

The second: the e-learning-specific layer — instructional design opinions, learning-objective scaffolding, knowledge checks generated alongside content, SCORM and xAPI export, LMS integration, learner-state persistence, course-scale coherence, ongoing accessibility — is not solved by a general visual design tool, because it was never the same problem.

The authoring tools that survive will be the ones that absorb generation rather than resist it, and that focus on the L&D-specific layer that a general tool will not build. That means a different shape of product: AI inside the editor rather than a separate workflow you copy-paste from. Generated interactions that are editable, not opaque. The plumbing — tracking, persistence, accessibility, reporting, branding, collaboration, LMS export — handled by the platform so the designer can focus on the learning design.

This is what LearnBuilder is built around. The piece I want to talk about specifically is the interactive slideshow builder, because it is where generation and editing meet most directly — and it is the closest thing in an e-learning tool to what Claude Design does for visual design.

Editing AI-created interactions in the LearnBuilder slideshow builder

The LearnBuilder slideshow builder shares a core idea with Claude Design: generated output should be editable in the same surface as hand-built output, not trapped behind a re-prompt loop.

When the AI generates an interaction inside a LearnBuilder slide, it does not produce one opaque blob. It creates elements — shapes, animations, bindings, text, images, controls — in the same element tree you build with by hand. After generation, you can select the rotor on a windmill simulation and change its colour. You can grab the slider and reposition it. You can resize the base, change the typography of a label, tweak the easing of an animation. The same controls that work for hand-built elements work for AI-built ones, because they are the same elements. This is the difference between editable and re-promptable: you fix the thing yourself, with direct manipulation, in seconds, without spending tokens or waiting for a regeneration.

The second piece is element-scoped AI. You can select a single element — this rotor — and ask the AI to modify just that. "Spin more slowly." "Change the pivot to the top of the pole." The model gets scoped context, not the whole slide. Tokens stay low. The rest of the slide is guaranteed not to change because the AI literally cannot touch it.

The third piece is exposed code. When the AI writes JavaScript for a custom interaction, that code is saved as a custom JS interaction on the slide. It is not hidden behind an opaque "AI layer." If the AI gets you eighty percent of the way, a developer on your team can open the code, read it, and cover the last twenty. The same hatch is available without AI: hand-write the JS for an interaction that has no template, paste it in, and it runs inside a properly structured course with all the surrounding infrastructure handled by the platform.
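As a hypothetical illustration of what "exposed code" means in practice, the rotor logic behind a windmill interaction might look like this — plain, readable JavaScript a developer can open and extend. The names (`makeRotor`, `setSpeed`, `tick`) are invented for this sketch, not LearnBuilder's actual API.

```javascript
// Hypothetical custom-interaction logic: a rotor whose speed is driven
// by a slider and whose angle is advanced by the slide's animation loop.
function makeRotor(initialRpm) {
  let rpm = initialRpm;
  let angle = 0;
  return {
    // Called from the slider's change handler.
    setSpeed(newRpm) { rpm = Math.max(0, newRpm); },
    // Advance rotation by elapsed seconds; 1 rpm = 6 degrees per second.
    tick(dtSeconds) {
      angle = (angle + rpm * 6 * dtSeconds) % 360;
      return angle;
    },
  };
}

const rotor = makeRotor(10);   // 10 rpm
console.log(rotor.tick(1));    // 60 (degrees after one second)
rotor.setSpeed(-5);            // clamped to 0: the rotor stops
console.log(rotor.tick(1));    // still 60
```

The point is not the windmill; it is that code of this shape is inspectable, so the last twenty percent is an edit, not a re-prompt.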

This is the same product shape Claude Design uses for visual design: generation, direct editing, element-level scoping, and a hatch to code. The difference is everything around it. In LearnBuilder, the generated interaction lives inside a lesson that lives inside a course that exports as SCORM 1.2, SCORM 2004, xAPI, or cmi5; that respects an account-level brand configuration applied to every lesson; that is checked by a built-in accessibility scanner before publish; that supports real-time multi-author collaboration; that tracks learner progress and reports completion to the LMS; that persists learner state across sessions; and that generates retrieval questions and scenario activities alongside explanatory content because the AI is working from instructional principles, not visual prompts.

Generation is the same idea. The platform around it is what makes it e-learning rather than a prototype.

Where Claude Design genuinely wins for L&D

The honest caveat. Even within an L&D team, there are jobs Claude Design is now plainly the right tool for.

Pitch decks for new programmes. Marketing assets for a course launch. Internal wireframes for a new learner experience that has not yet been built. Stakeholder prototypes that need to be polished and on-brand without engineering involvement. High-craft one-off interactions where a designer wants total control over the visual language and is prepared to handle the integration manually. The handoff path to Claude Code is also genuinely useful when an interaction needs to become production code rather than stay a prototype.

For the work of producing courses — the day-job of an instructional designer in a corporate L&D function — Claude Design is not the tool, because it is not trying to be. The right combination is Claude Design for the visual-design adjacent work and a generation-native authoring tool for the courses themselves.

Replacement is the wrong frame

Claude Design is not going to replace e-learning authoring tools. It is going to replace the parts of authoring tools that were template libraries dressed up as products — and, for a while, the parts of design tools that were template libraries dressed up as products. The parts that survive in e-learning — the parts worth paying for — are the parts that turn out to be e-learning-specific: instructional design opinions, learning objectives, retrieval practice, SCORM and xAPI, LMS integration, learner-state persistence, course-scale coherence, ongoing accessibility.

An authoring tool built around the same generation-plus-direct-editing idea Claude Design uses, but pointed at courses rather than prototypes — and bringing the L&D-specific plumbing — ends up doing what neither Claude Design alone nor a traditional template tool can do.

That is the bet LearnBuilder is making. Anthropic just made the case for it more clearly than I would have.


If you want to see what generation-inside-the-editor looks like in practice, the free trial includes the AI coding block in the interactive slideshow builder. Related: Building an AI coding block into LearnBuilder and LearnBuilder vs vibe-coding your own learning content.