
Design Thinking Workshops: 12 Innovative Activities and a 2026 Workshop Playbook

Innovative design thinking workshop activities combine classic methods like Crazy 8s and How Might We with AI-assisted research synthesis, video personas, and rapid prototyping. Complete 2026 playbook with 12 activities, a sample 1-day agenda, common mistakes, and how to measure workshop success.

Justin McKelvey
May 13, 2026

A great design thinking workshop pairs structured ideation with hands-on activities that force participants out of presentation mode and into prototype mode. The most innovative workshops combine classic methods — Crazy 8s, How Might We, empathy maps — with newer formats like AI-assisted research synthesis, remote whiteboard collaboration, video persona interviews, and rapid-clay prototyping. The activities that actually move teams beyond consensus-driven meetings are the ones that produce a tangible artifact every 30-45 minutes, force divergent thinking before convergent voting, and end with a clear next-action commitment per participant.

This is the 2026 working playbook we use at SuperDupr when we run design thinking workshops with product, marketing, and operations teams. We cover what these workshops are actually for, twelve activities worth running, a sample one-day agenda, the facilitation discipline that keeps a workshop from devolving into a talking shop, the mistakes that quietly waste everyone's afternoon, and how AI tools have reshaped the format in the last 18 months. Each section names the specific activity, the artifact it produces, and the failure mode to watch for.

Key Takeaways

  • Innovative design thinking workshops combine classics (Crazy 8s, How Might We, empathy maps) with 2026 formats — AI-assisted synthesis, video personas, remote whiteboards, rapid prototyping in Figma AI/v0/Bolt.
  • The single best test of an activity is "does it produce a tangible artifact in under 45 minutes?" — everything else is a meeting in disguise.
  • A workshop is a forcing function for cross-functional alignment and creative output, not a substitute for strategy or research.
  • Group size matters more than tool stack — 6-10 participants is the sweet spot; 15+ is a panel discussion with sticky notes.
  • The workshop is worth it only if every participant leaves with a named action and a deadline. No follow-through, no value.

What Design Thinking Workshops Are Actually For

A design thinking workshop is a time-boxed, facilitated session that takes a team through some subset of the canonical five stages — empathize, define, ideate, prototype, test — and produces tangible outputs (sketches, prototypes, prioritized lists, decisions) by the end of the session. The format originated in the IDEO and Stanford d.school tradition and has since been adapted into design sprints (Google Ventures), Lightning Decision Jams (AJ&Smart), and dozens of other variants. The common thread is structured time-boxing plus hands-on activity plus divergent-then-convergent thinking.

What workshops are actually good at: problem framing when a team is talking past each other, building shared empathy for a user the team has assumed they understood, generating a much wider set of ideas than free-form brainstorming produces, prototyping fast enough that you can test something within a week, and forcing decisions that have been deferred for months. The right workshop turns a meandering Slack thread into a prioritized backlog and a prototype the team can react to.

What workshops are not: a substitute for actual user research, a strategy retreat, a kickoff meeting with extra sticky notes, or a way to manufacture consensus on a decision someone has already made. The fastest way to ruin a workshop is to use it for any of those — participants smell it within the first hour, and the rest of the day becomes a polite performance. Be honest with yourself about whether you need a workshop or a different format entirely. Nielsen Norman Group's primer on design thinking is a good sanity check before you commit a day of senior-team calendar.

12 Innovative Activities for Design Thinking Workshops

The activities below are the ones that have survived in our facilitation rotation across roughly 50 client workshops at SuperDupr. They mix classics — because the classics work — with newer formats that lean on remote whiteboards, AI synthesis, and faster prototyping tools. Each produces a tangible artifact. Each can be run in 25-45 minutes. Each forces divergent thinking before convergent voting.

  1. Crazy 8s — Sketch 8 Ideas in 8 Minutes

    The canonical fast-ideation exercise. Each participant folds a sheet of paper into 8 panels and has 60 seconds per panel to sketch a different solution to the framed problem. Eight panels, eight ideas, eight minutes total. The pressure is the point — the first three ideas are obvious, the fourth and fifth are derivative, and ideas six through eight are where the team gets weird and useful. Participants share, the room dot-votes, the top 2-3 ideas advance to a longer "solution sketch" round.

    Crazy 8s works because it forces volume before judgment, which is exactly the cognitive trap most teams fall into in unstructured brainstorming. It also works because it produces a literal physical artifact — eight sketches per person, ten people, that's 80 ideas on the wall before lunch. Run it remotely in Miro or FigJam with the same time-box; the constraint matters more than the medium.

  2. How Might We Reframing

    Before ideation, reframe the problem statement as a series of "How Might We…" (HMW) questions. A team that walks in with "our checkout conversion rate is too low" gets stuck on the same five interventions everyone has already debated. A team that reframes into HMWs — "How might we reduce the cognitive load of the checkout form?", "How might we let returning customers skip steps?", "How might we make shipping cost visible earlier?" — gets meaningfully different solution spaces to explore.

    Run HMW as a 15-minute exercise. Each participant writes 3-5 HMW statements on sticky notes, the group affinity-clusters them, and the room dot-votes on which 2-3 HMWs are the most generative for the rest of the workshop. The chosen HMWs become the brief for every subsequent ideation round. Bad HMWs are too narrow ("How might we move the shipping field two pixels left?") or too broad ("How might we delight customers?"). Coach toward the middle.

  3. Empathy Maps With Real User Video Clips

    The classic empathy map — Says / Thinks / Does / Feels quadrants for a target user — works far better when the team is reacting to 60-90 seconds of actual user video than to imagined personas. Before the workshop, the research team pulls clips from past user interviews (or runs a fresh 30-minute session and clips the highlight reel). At the workshop, each table watches 2-3 clips, then fills in the empathy map collaboratively, then shares the map with the rest of the room.

    The video clips do the heavy lifting. They short-circuit the "I think users probably want X" reflex that derails most empathy work. They give skeptics in the room something specific to react to. And the resulting empathy maps are noticeably less generic than the ones produced from memory alone. If you can't get fresh user video, customer support call recordings (with permission) are an excellent substitute.

  4. AI-Assisted Persona Synthesis

    This is the 2026 update on persona work that has changed our workshops the most. Before the session, paste the team's raw research notes — interview transcripts, support tickets, survey open-ends, sales-call transcripts — into ChatGPT or Claude with a structured prompt: "Cluster these notes into 3-5 distinct user types. For each, write: their job-to-be-done, top 3 pain points, top 3 motivations, and a representative quote." The model produces a draft set of personas in 5-10 minutes that would have taken a research team two days.

    The catch: the AI-generated personas are a starting point, not the answer. The workshop's job is to react to them — challenge, refine, kill the ones that don't ring true, merge the ones that overlap. This is where the team's domain knowledge actually adds value. The AI compresses the synthesis time so the workshop can spend its hours on judgment, not transcription. Pair this with the empathy-map activity above for a tight one-two punch.
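    The structured prompt above can be assembled programmatically from your research notes before the session. A minimal sketch — the function name and numbered-note format are illustrative assumptions, not part of any tool we ship:

```python
def persona_synthesis_prompt(notes: list[str], min_types: int = 3, max_types: int = 5) -> str:
    """Build the persona-clustering prompt described above from raw research notes.

    `notes` is a list of raw snippets: interview quotes, support tickets,
    survey open-ends. The wording mirrors the prompt quoted in the text.
    """
    numbered = "\n".join(f"{i}. {note.strip()}" for i, note in enumerate(notes, 1))
    return (
        f"Cluster these research notes into {min_types}-{max_types} distinct user types.\n"
        "For each type, write: their job-to-be-done, top 3 pain points, "
        "top 3 motivations, and a representative quote drawn from the notes.\n\n"
        f"Notes:\n{numbered}"
    )
```

    Paste the output into ChatGPT or Claude (or send it through whichever API your team already uses); the point is that the prompt is reproducible, so every workshop starts from the same synthesis recipe.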

  5. Reverse Brainstorm — "How Could We Make This Worse?"

    When ideation stalls — and it always stalls around the 3rd or 4th hour — flip the prompt. Instead of "how might we improve the onboarding flow," ask "how might we make the onboarding flow as bad as possible? What would actively drive users away?" The room laughs, the energy returns, and within 20 minutes you have a list of 30-40 anti-patterns. Then invert each one to find what good looks like.

    Reverse brainstorming works because criticism is cognitively easier than creation. The team that can't list ten more "ways to improve" can effortlessly list twenty "ways to ruin." And the inversion step often surfaces solutions the team had blind spots around — "don't ask for credit card upfront" sounds obvious in retrospect, but it's much more likely to come out of "let's require a credit card on signup with no trial" than "let's improve activation."

  6. Rapid-Clay or Paper Prototyping

    The original "low-fi prototyping" activity, and still one of the best. Hand each table modeling clay, cardboard, scissors, tape, and markers. Give them 25 minutes to build a tangible representation of their proposed solution. For digital products, paper sketches stapled into a "click-through" booklet work the same way — each page is a screen, the user "taps" by being handed the next page.

    The medium matters less than the constraint. Clay, paper, and cardboard are deliberately low-fidelity to keep participants from getting precious about the artifact. Nobody falls in love with a clay prototype the way they fall in love with a Figma mockup, which means the room is much more willing to throw it away and start over. We use this even for software workshops — the act of physically building something with your hands changes how people talk about the design.

  7. Storyboard the User Journey

    Give each table a strip of 6-8 panels and have them draw a comic-strip narrative of the user moving through the experience. Start before the user encounters the product, end after the desired outcome. The panels force the team to think about emotional state, friction points, and "what happens when…" branches that a flowchart hides. A good storyboard has at least one panel where the user is confused or frustrated — if everything is sunshine, the team isn't being honest.

    Storyboards work especially well when paired with the empathy map and video-clip activity earlier — the team has a real user in mind and a real emotional context. We've used this for ecommerce checkout redesigns, B2B SaaS onboarding flows, and even internal-tool refactors. The storyboard becomes the spec the engineering team actually builds against. See our companion guide on ecommerce website best practices for what these journeys look like applied to real checkout flows.

  8. Dot Voting and Heatmaps for Prioritization

    After ideation, the room has 40-200 ideas on the wall. The team needs to converge fast. Give every participant 4-6 sticky dots (different colors for different criteria — impact, effort, novelty) and 5 minutes to vote. The ideas with the densest dot clusters advance. Heatmaps are the digital equivalent in Miro or FigJam — each participant drops emoji reactions on the ideas they want to move forward, and the visual density of reactions is the prioritization signal.

    Voting works only with clear criteria. "Vote for the ideas you like" produces a popularity contest. "Vote for the three ideas that would have the biggest impact on activation in the next quarter" produces a usable list. Force the criterion into the prompt. And do silent voting first — no discussion until votes are placed — to prevent the loudest voice in the room from anchoring everyone else.

  9. Remote Whiteboard Sprints

    Hybrid and fully-remote workshops are the 2026 default, not the exception. Miro, FigJam, and Lucidspark all support the full design-thinking toolkit — sticky notes, voting, timers, templates, video embeds. Run the same exercises (Crazy 8s, HMW, empathy maps) in the digital tool with strict time-boxes and a dedicated facilitator who manages the canvas. Cameras on for the social pressure that keeps people present.

    The discipline that makes remote workshops succeed: pre-built templates so you don't waste 20 minutes setting up the canvas live, a co-facilitator handling chat and breakout rooms while the lead facilitator runs the activity, and frequent 5-minute breaks because Zoom fatigue is real. Remote workshops can match in-person quality, but only if the facilitation is materially more disciplined.

  10. Role-Play the User

    Assign each participant a user persona and have them physically walk through the experience as that user — narrating what they're seeing, thinking, and trying to do at each step. For digital products, project the actual interface and let the "user" click around with the team narrating their confusion. For physical or service experiences, set up the props and have them act it out.

    Role-play feels awkward for the first three minutes and revelatory for the next thirty. The team that has been arguing about whether the onboarding is "too long" suddenly watches a teammate fumble through it as a confused first-time user, and the conversation changes. Pair this with a "Wizard of Oz" prototype — one teammate manually plays the role of the AI / backend / system from behind the curtain — to test concepts before any code is written.

  11. Pre-Mortem — "Imagine the Project Failed. Why?"

    Borrowed from Gary Klein's decision-research work. Before committing to a solution, run a 30-minute pre-mortem: "It's six months from now. The project has failed completely. Write down all the reasons why." Each participant lists 5-10 failure modes silently, then the room shares and clusters them. The top 5 failure modes become the risks the team explicitly mitigates in the rollout plan.

    Pre-mortems work because they give skeptics permission to voice doubts they would otherwise self-censor. The framing — "imagine it already failed" — bypasses the optimism bias that haunts most kickoff meetings. We've never run a pre-mortem that didn't surface at least one risk the team would have otherwise missed, and roughly half the time the surfaced risk changes the plan materially.

  12. Lightning Decision Jam (LDJ)

    AJ&Smart's compressed decision-making format. Use this when the workshop needs to produce concrete next-week actions, not just ideas. The flow: (1) write down problems silently — 4 min, (2) cluster and dot-vote — 5 min, (3) reframe the top problem as a How Might We — 3 min, (4) silent solution-sketching — 7 min, (5) dot-vote on solutions — 3 min, (6) effort/impact matrix the top solutions — 5 min, (7) commit to the top 1-2 as 2-week experiments with named owners — 5 min. Total: about 35 minutes.

    LDJ is the activity we run when a team has been spinning on a decision for weeks. It works because every step is silent before it's discussed, the time-boxes are aggressive, and the output is always a specific named action with an owner. It's not a substitute for deeper research when you genuinely don't know enough yet — but for the "we keep talking about this and never deciding" problem, it's the sharpest tool in the kit.
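The convergence mechanics that recur above — criterion-scoped dot voting (activity 8) and the LDJ effort/impact step (activity 12) — are simple enough to sketch in a few lines. The data shapes here are our illustrative assumptions, useful mainly when you export votes from a Miro or FigJam board:

```python
from collections import Counter

def tally_votes(votes: list[str]) -> list[tuple[str, int]]:
    """Count dots per idea; each entry in `votes` is the idea a dot landed on.
    Returns (idea, dot_count) pairs, densest cluster first."""
    return Counter(votes).most_common()

def effort_impact_sort(ideas: dict[str, tuple[int, int]]) -> list[str]:
    """Order ideas by the LDJ matrix: highest impact first, lowest effort
    breaking ties. Values are (impact, effort) on a 1-5 scale."""
    return sorted(ideas, key=lambda idea: (-ideas[idea][0], ideas[idea][1]))
```

The top one or two of the sorted list become the committed 2-week experiments; everything else goes to the backlog with its vote count attached so the prioritization survives the room.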

A Sample 1-Day Design Thinking Workshop Agenda

The agenda below is the default one-day workshop we run with cross-functional teams (product, design, engineering, marketing, customer success). Six to ten participants, one facilitator, one co-facilitator handling artifacts and time. Adapt the activity choices to the problem at hand — the time blocks are deliberately conservative because activities always run long.

| Time | Activity | Output |
| --- | --- | --- |
| 9:00 - 9:30 | Welcome, problem framing, success criteria, ground rules | One-sentence problem statement, agreed success criteria, list of explicit non-goals |
| 9:30 - 10:30 | Empathy maps with real user video clips (3 clips, 3 personas) | Three completed empathy maps on the wall, shared aloud |
| 10:30 - 10:45 | Break | |
| 10:45 - 11:15 | How Might We reframing + dot vote on top 3 HMWs | 3 prioritized HMW statements as the brief for ideation |
| 11:15 - 12:15 | Crazy 8s + solution sketch round (per chosen HMW) | ~80 sketched ideas, top 3 solution sketches per HMW advanced |
| 12:15 - 1:15 | Lunch (working — informal discussion of morning ideas) | |
| 1:15 - 2:15 | Storyboard the user journey for the top solution(s) | One detailed 6-8 panel storyboard per chosen solution |
| 2:15 - 3:30 | Rapid prototyping (paper, Figma, or v0/Bolt for digital) | One testable low-fi prototype per chosen solution |
| 3:30 - 3:45 | Break | |
| 3:45 - 4:30 | Pre-mortem on the chosen solution(s) | Top 5 risks identified with mitigation owners |
| 4:30 - 5:00 | Lightning Decision Jam: next 2-week experiments + named owners | 1-3 committed experiments with owners and deadlines |

Want SuperDupr to Facilitate Your Design Thinking Workshop?

We run one-day and two-day design thinking workshops for product, growth, and brand teams — on-site or remote. You walk in with a problem, you walk out with a prototype, a prioritized backlog, and named owners. Book a discovery call to scope a session for your team.

Book a Workshop Discovery Call →

How to Design a Workshop That Doesn't Devolve Into Talking

The single biggest failure mode in design thinking workshops is the slow drift back into discussion mode — someone makes a point, someone else responds, twenty minutes later the room has produced no artifact and the next activity is delayed. The facilitation discipline that prevents this is mostly mechanical, not charismatic.

  • Time-box every activity, visibly. A countdown timer on the wall (or shared screen) for every exercise. When the timer hits zero, the activity ends. No "let's just take five more minutes" — discipline compounds, and so does slop.
  • No laptops during ideation. Pens, sticky notes, paper. Laptops are Slack-checking machines disguised as productivity tools. Allow them only during prototyping (when you need Figma) and synthesis (when you need typing).
  • Every activity produces an artifact. Sketches, maps, sticky-note clusters, prototypes, voted lists. If an activity ends with "great discussion, let's move on," it wasn't an activity — it was a meeting.
  • Separate facilitator and participant roles. The facilitator does not contribute ideas. The facilitator runs time, prompts the room, manages artifacts, and keeps energy up. Mixing the two roles is the #1 reason workshops drift — the facilitator gets pulled into the content and stops facilitating.
  • Silent generation before group discussion. Every ideation step starts with individual silent work, then sharing. This neutralizes the loudest-voice problem and produces meaningfully more diverse outputs.
  • Dot vote before debate. The room votes on options before anyone argues for or against them. This prevents the senior person in the room from anchoring everyone else.
  • End every block with a 30-second "what's the artifact?" check. If you can't name what came out of the last 45 minutes, the next 45 minutes are at risk.
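The visible countdown in the first rule needs nothing fancier than a terminal loop projected on the shared screen. A minimal sketch — the MM:SS display format and the "pens down" message are our assumptions:

```python
import time

def mmss(seconds_left: int) -> str:
    """Format remaining time as MM:SS for the projected countdown."""
    minutes, seconds = divmod(max(seconds_left, 0), 60)
    return f"{minutes:02d}:{seconds:02d}"

def run_timer(minutes: int) -> None:
    """Run a visible countdown; the activity ends at 00:00, no extensions."""
    end = time.monotonic() + minutes * 60
    while (left := int(end - time.monotonic())) > 0:
        print(f"\r{mmss(left)}", end="", flush=True)
        time.sleep(1)
    print("\r00:00 — pens down")
```

Dedicated wall timers and the built-in timers in Miro/FigJam do the same job; what matters is that the countdown is visible to the whole room, not on the facilitator's phone.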

Common Design Thinking Workshop Mistakes

  • Too many participants. Above 10-12 people, every activity slows down by 2-3x and the room loses cohesion. If 25 people need to be involved, run three parallel workshops with 8 each and a synthesis session afterward — don't try to scale a single workshop past its natural ceiling.
  • No clear problem statement. Walking in with "let's talk about onboarding" is not a workshop brief — it's an open-ended meeting invite. The problem statement should be specific enough that a participant could explain it in one sentence to a stranger.
  • Skipping the empathy phase. Teams jump straight to ideation because empathy work feels slow. The result is solutions that solve the team's projected version of the problem, not the user's actual problem. Always invest at least 25% of the workshop in user-grounding activities.
  • Converging too fast. Two ideas after 15 minutes is not divergent thinking — it's the team picking the two ideas they walked in with. Force volume before judgment. Crazy 8s exists for this reason.
  • No follow-through plan. The workshop ends, everyone applauds, and three weeks later nobody can find the photos of the wall. The output of a workshop has to be operationalized within 48 hours — digitized artifacts, owned actions, calendar invites for the next-step meetings.
  • Treating the workshop as the work. A workshop is a forcing function, not a substitute for the actual product development that follows. Teams that treat the workshop as a deliverable in itself produce great walls of sticky notes and zero shipped features.
  • Inviting decision-makers as observers. Executives who attend but don't participate either intimidate the room into self-censorship or distract it with executive opinions injected at the worst moments. Either they're a full participant under the same rules, or they're briefed afterward — not both.
  • Using the wrong room. Conference rooms with one long table and no wall space kill workshop energy. You need wall space for artifacts, floor space for movement, and ideally natural light. A cramped 8-person conference room with a projector is a meeting room, not a workshop space.

How AI Tools Are Reshaping Design Thinking Workshops in 2026

AI has changed design thinking workshops along four axes in the last 18 months, and the workshops that don't adapt feel noticeably slower than the ones that do.

Research synthesis is compressed from days to minutes. Pre-workshop, paste raw research — interview transcripts, support tickets, NPS verbatims, sales-call notes — into ChatGPT or Claude and ask for clustered themes, draft personas, and representative quotes. What used to be a 2-day research-readout deck becomes a 30-minute pre-read. The workshop spends its hours reacting to the synthesis, not waiting for it. We cover the broader pattern of AI-augmented operations in our AI workflow automation solution guide.

Persona generation is a starting point, not the bottleneck. AI-drafted personas from raw research notes give the team something to react to rather than a blank canvas. The workshop's job becomes refinement and challenge — the human judgment work — rather than transcription and clustering. Pair AI personas with real user video clips for the best results: the personas frame the problem, the video clips ground it.

Prototyping moves into the workshop itself. Figma AI, v0, Bolt, and Lovable can take a sketch or a short prompt and produce a working interactive prototype in 5-15 minutes. The team that used to leave a workshop with paper sketches now leaves with a clickable web prototype. The "prototype" step in the agenda has gotten dramatically more powerful — and the gap between workshop output and engineering input has narrowed from weeks to days.

Real-time documentation and follow-through. AI note-takers (Granola, Otter, Fireflies) capture the discussion automatically, transcribe and summarize within minutes of the workshop ending, and flag action items by speaker. Combined with the digitized artifacts from Miro or FigJam, the post-workshop documentation package — usually the part teams skip — is now substantially automated.

Remote-first hybrid workshops are the default. Pre-2024, remote design thinking workshops were a compromise. Now, with Miro AI / FigJam AI handling sticky-note clustering, Granola transcribing in real time, and v0/Bolt producing prototypes on the fly, fully-remote workshops can match in-person quality on most dimensions. The exceptions are hands-on physical prototyping (clay, cardboard) and the social energy that comes from being in a room together — both real, both worth optimizing for when the calendar allows it.

Materials and Tools — What Actually Helps

The useful workshop tool stack is shorter than the tools' marketing would suggest. Below is what we actually use across in-person and remote sessions.

  • Physical materials (in-person). Post-it Super Sticky notes (the ones that actually stay on the wall), Sharpie fine-point markers in 4-5 colors, dot stickers in 2-3 colors for voting, blank paper for sketching, modeling clay or LEGO bricks for prototyping, painter's tape for impromptu walls, a wall timer or projected countdown.
  • Whiteboard tools (remote / hybrid). Miro, FigJam, Lucidspark, Mural. Pick one and stick with it — the templates and team familiarity matter more than feature parity. Pre-build the workshop board the day before with all activities laid out and timers ready.
  • AI synthesis. ChatGPT (GPT-5 or higher) or Claude (Sonnet 4 or higher) for clustering research notes, drafting personas, and brainstorm expansion. Used pre-workshop, not in place of workshop discussion.
  • Prototyping. Figma + Figma AI for clickable mockups, v0 for AI-generated React components, Bolt or Lovable for fuller working web prototypes, paper for the earliest fidelity.
  • Documentation. Granola, Otter, or Fireflies for transcription and action-item extraction. Notion or Linear for the post-workshop artifact dump and action ownership.
  • Audio recording. If you're capturing user video for the empathy-map activity, a Rode VideoMic Go II or a Zoom H1n with a lavalier handles 95% of the audio you'll ever need. Smartphone video is fine for the visuals.
  • Camera and lighting. One key light, one ambient light, smartphone or DSLR. Don't overthink it — workshop video is for internal use, not a Netflix doc.

How to Measure if a Design Thinking Workshop Was Worth It

The wrong measurement is "did everyone enjoy it." The right measurements are about artifacts produced, decisions made, and outcomes shipped.

  • Tangible artifacts produced. Sketches, maps, prototypes, prioritized lists. Count them. A workshop should produce dozens of artifacts; if it produced one slide deck, it was a meeting.
  • Decisions made. Specific named choices the team committed to during the session. "We will A/B test the simplified onboarding next sprint" is a decision; "we should think about onboarding" is not.
  • Named follow-up actions with owners and deadlines. Every workshop should end with 3-10 actions, each owned by a single named participant, each with a deadline within 14 days. Without this, the workshop is a wall of sticky notes that gets photographed and forgotten.
  • Participant NPS (asked 48 hours later, not in the room). The room-temperature NPS is always inflated. The two-day-later NPS is honest. Aim for 8+ on "would you do this again."
  • Prototypes that survive to production. Track which workshop outputs become real features within 90 days. A healthy hit rate is 40-60% of prototypes shipping in some form within a quarter — much lower and the workshop is generating ideas the team can't execute.
  • Time-to-decision before vs after. If a topic had been debated in Slack for six weeks and was resolved in a 45-minute LDJ, that's measurable workshop value. Track it.
  • Revenue or product KPIs that move 90 days out. The ultimate test. Did activation, retention, conversion, or whichever KPI the workshop targeted actually move? If yes, do more workshops. If no, change the format or the participants — or stop running them and try a different intervention.
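The prototypes-that-ship check above is worth actually computing each quarter rather than estimating from memory. A trivial sketch — the `shipped_within_90d` field name is an illustrative assumption about how you track workshop outputs:

```python
def prototype_hit_rate(prototypes: list[dict]) -> float:
    """Share of workshop prototypes that shipped in some form within 90 days.

    Each record needs a boolean `shipped_within_90d` field (illustrative
    name); returns a fraction in [0, 1].
    """
    if not prototypes:
        return 0.0
    shipped = sum(1 for p in prototypes if p["shipped_within_90d"])
    return shipped / len(prototypes)
```

Per the benchmark in the text, a healthy value lands between 0.4 and 0.6; persistently lower means the workshops are generating ideas the team can't execute.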

Where to Go Next

If you have a problem your team has been spinning on, the right next step is usually a one-day workshop with the structure above — not another meeting, and not a multi-week strategy engagement. If you want SuperDupr to facilitate, book a discovery call and we'll scope a session built around your specific problem. If you'd rather run it in-house, the agenda and activities above are everything you need to get started; pair them with IDEO U's inspiration archive for activity variants and the Nielsen Norman primer as a sanity check.

For related reading: our guides on brand identity development (where design thinking workshops are often the highest-value early activity), scalable online platforms (for the architectural decisions workshops surface), and ecommerce website best practices (for what user-journey storyboards look like applied to real checkout flows). Solutions context: custom web design covers the design side of what workshops produce, and AI workflow automation covers the operational side. Good workshops are forcing functions for good products — but only if you do the work after the room empties.
