The Black Mirror Exercise: Building Better by Imagining Worse
Why my design team writes Black Mirror episodes to sharpen our sense of ethics
If you want to skip ahead and try it yourself, here’s the Miroverse template.
When my design team gathers to brainstorm, the room usually fills with energy about what’s possible: new features, smoother flows, better experiences. But sometimes it’s just as important to ask the opposite question: What could go horribly wrong?
To do that, I borrowed a framework from Joshua Mauldin’s Black Mirror Brainstorms, itself inspired by the Cover Story exercise from Gamestorming. We adapted it into our own Black Mirror Exercise, and it’s quickly become one of my favorite tools for surfacing ethical risks.
Why Black Mirror?
The show thrives on taking technology that feels familiar, even ordinary, and then twisting it into something unsettling. It’s a masterclass in showing the unintended consequences of good intentions. As builders of digital products, we need that same skill: to see beyond what we hope will happen and imagine what we really don’t want to happen.
How It Works
We structured the session into four parts, roughly an hour total.
Step 1 — Intro (5 min)
For anyone who hadn’t seen Black Mirror, we shared a quick synopsis and a few favorite clips or trailers to set the mood.
Step 2 — Brainstorm (15 min)
Using ethical prompts (like “How might this exploit someone’s financial insecurity?” or “What if this caused severe anxiety or depression?”), each designer jotted down worst-case scenarios tied to real tech we’re working with. Everyone wrote solo for a few minutes before we talked, so quieter voices still had space.
Step 3 — Create Your Episode (20 min)
Teams filled out a Netflix-style template. Each episode needed:
- A title and one-sentence synopsis
- Key plot points (characters, setup, problem, effect)
- Quotes someone might say in the episode
- AI-generated cover art for the episode
That last piece is where it got fun. We used Midjourney or DALL·E to generate the posters; prompts like “Cinematic Black Mirror poster, glitch aesthetic, dark moody tones, neon highlights, unsettling futuristic vibe” made the episodes feel shockingly real.
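If your team tracks these sessions digitally, the episode card maps naturally onto a small data structure. A minimal sketch, assuming Python; the field names here are illustrative, not part of the published template:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One 'episode' card from a Black Mirror Exercise session.

    Field names are illustrative stand-ins for the template sections.
    """
    title: str
    synopsis: str                                         # one-sentence pitch
    plot_points: list[str] = field(default_factory=list)  # characters, setup, problem, effect
    quotes: list[str] = field(default_factory=list)       # lines someone might say in the episode
    cover_art_prompt: str = ""                            # prompt fed to an image model for the poster

# Example card, based on the "Reflections" episode described below:
reflections = Episode(
    title="Reflections",
    synopsis="A smart mirror quietly replaces real memories with idealized ones.",
    plot_points=[
        "A user adopts an AR self-image mirror",
        "The flattering overlays grow subtler over time",
        "Real memories blur into the edited versions",
    ],
    quotes=["I don't remember my face ever looking like that."],
    cover_art_prompt="Cinematic Black Mirror poster, glitch aesthetic, dark moody tones",
)
```

Nothing about the exercise requires this, but a shared structure makes it easy to archive episodes and revisit them when a real feature starts to rhyme with one.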
Step 4 — Discuss (20 min)
Everyone presented their “episode” and we rated each one on two scales:
- Likelihood of happening (1–5)
- Impact if it did occur (1–5)
The discussion that followed was where the real value surfaced. Designers challenged each other, spotted blind spots, and highlighted risks we might otherwise miss.
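One way to act on the two ratings is to fold them into a single priority score, risk-matrix style, so the scariest-and-most-plausible episodes get discussed first. This is an assumption layered on top of the exercise (the session itself only asks for the two separate 1–5 ratings), and the episode names below are hypothetical:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Combine the two 1-5 ratings into one priority score.

    Multiplying likelihood by impact is a standard risk-matrix
    heuristic; the exercise itself only collects the two ratings.
    """
    for value in (likelihood, impact):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be on the 1-5 scale")
    return likelihood * impact

# Hypothetical ratings (likelihood, impact) from a session:
ratings = {
    "Reflections": (3, 5),
    "Infinite Scroll": (5, 2),
    "Confetti": (4, 4),
}

# Rank episodes by score, highest-priority risks first.
ranked = sorted(ratings, key=lambda name: risk_score(*ratings[name]), reverse=True)
```

Here "Confetti" (4 × 4 = 16) would lead the discussion ahead of "Reflections" (15) and "Infinite Scroll" (10).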
Example Episode: Reflections
One of my favorites:
“People begin using a smart mirror that promises to improve their self-image through subtle AR overlays. Over time, the mirror starts replacing their real memories with idealized versions, until they can no longer tell what’s authentic.”
We mocked up the episode card, complete with poster art, and suddenly the ethical debate wasn’t abstract anymore. It was right there on the screen, daring us to think about how close we already are to that reality.
Why This Works
- Visual storytelling makes risks tangible.
- Ethical prompts push the conversation past clichés.
- Shared shorthand emerges. Weeks later, someone can say “Careful, this is drifting toward Black Mirror territory” and everyone knows what they mean.
- Anti-goals become clearer. Just like Joshua Mauldin described, these sessions help us define what we don’t want our products to do.
Why It Matters
None of us set out to design harmful experiences. But as history keeps showing us, harm doesn’t require bad intent. It only requires a lack of imagination about how things might play out.
Target once accidentally revealed a teen’s pregnancy by sending personalized coupons before she told her family. Facebook once showed a grieving father photos of his deceased daughter framed by confetti in its “Year in Review” feature. Neither team meant to hurt people, yet the harm was real.
Black Mirror Exercises won’t solve every problem, but they give design teams a way to slow down, think differently, and catch the risks before they ship.
Prompts for Ethical Brainstorming
Here’s the full set of prompts we used in our sessions, adapted from Joshua Mauldin’s article and the Cover Story exercise:
Physical Harm
- What would happen if a person used this to harm others?
- What would happen if this exposed someone’s personal information?
- How might this exploit someone’s financial insecurity?
- How might this drive someone to violence?
- What would happen if someone was too distracted because of this?
Emotional Harm
- How might this harm someone psychologically?
- What would happen if people became addicted to this?
- What would happen if this betrayed someone’s trust?
- How might this harm other people’s relationships?
- What would happen if this caused someone severe anxiety or depression?
Societal Harm
- What would happen if this harmed the environment?
- What would happen if this caused political polarization?
- What would happen if this was used to treat people unfairly?
- What would happen if this caused inequality of some kind?
- What would happen if this excluded a group of people?
Additional Lenses
- Economic Issues: Could this enable bad faith actors to prey on vulnerable groups?
- Politics: Could a corrupt government use this info to identify and target dissidents?
- Social Issues: Could this product reinforce already-present societal divides?
By imagining our work as a potential Black Mirror episode, we get sharper at spotting the edges of harm. We also give ourselves a shared language and a set of anti-goals to steer by. It’s not about predicting the future, but about protecting the people who’ll have to live with the products we release.
This exercise resonated so much with my team that I turned it into a published Miroverse template so others can use it too. Here’s a link if you’re interested: https://miro.com/miroverse/black-mirror-exercise/
