Story time. I recently worked with some stakeholders who wanted a full performance report for their department. Standard metrics for the most part, and we followed a pretty standard process. We did the upfront work, met with everyone, asked all the right questions and so forth.
Among the requirements, they asked for a personnel ranking based on sales, which I'm usually cautious about. Rankings often lack context and can strip away the meaning behind the numbers. But the stakeholders were adamant about it, so we included it in the original scope.
Fast forward to just before launch, and one visual kept nagging at me: the ranking. One employee was consistently sitting at the bottom, and I wanted to dig into that before we shipped it. I didn't know this employee, but something about that pattern felt off; it was begging to be explained.
What we found changed the entire narrative.
The Dashboard Was Right. The Frame Was Wrong.
The Frame Problem: How Correct Data Can Tell the Wrong Story
Every metric we build sits inside a frame. And that frame carries a set of assumptions: about conditions, about context, about what's being compared and what isn't. Most of the time nobody thinks to question it. Not because anyone is being lazy or careless, but because by the time a dashboard launches, the frame is invisible. It's just ~the dashboard~.
This is what I call a frame problem. And the tricky thing about frame problems is that they don't look like errors at all. In fact, the numbers and calculations can be perfectly accurate. The problem is that the design itself contains an assumption nobody questioned.
What a frame actually is. Think of a frame as everything your metric assumes to be true in order to mean what you think it means. A revenue ranking assumes comparable opportunity. A response time metric assumes comparable ticket complexity. A conversion rate assumes comparable traffic quality. Strip those assumptions away and the number changes shape entirely.
In the story I mentioned earlier, the ranking assumed something that seemed obvious on the surface: that all reps were operating under roughly equal conditions. Same product, same team, same time period. What could be unequal? Well, turns out, quite a bit.
How invisible assumptions get in. They usually enter through the question itself. When a stakeholder asks "who are my top performers?" they've already framed the answer as a ranking. Your job in that moment feels like execution: pick the right metric, build the visual, launch it. But buried in that question is an assumption that performance is something you can isolate from context and compare directly across people.
You can't. Not always. And the analyst who doesn't pause on that, even briefly, is the one who ships a dashboard that accidentally turns an unknown problem into a people problem.
Auditing your own frames. Before you finalize any performance-oriented visual, sit with three questions:
What does this metric assume is equal across everything I'm comparing? Reps, stores, campaigns, products, whatever the unit of analysis is. What conditions are you treating as roughly equivalent that might not be?
What data would change the story this visual tells? If you added one more variable to this view, what would it be? Would it reframe the outliers? Would it explain the bottom of the ranking? If the answer is yes, that variable probably belongs in the analysis.
What question is this visual actually set up to answer, and is that the question that matters? The stakeholder asked for a ranking. But what they probably wanted to know was where to focus coaching, hiring, or process improvement. Those aren't always the same question.
None of this requires slowing down delivery or pushing back on every request. It's about ten minutes of deliberate thought before you lock in your visual choices.
Back to the story. When I dug into that ranking, what I found was pretty simple once I looked for it. That employee wasn't underperforming at all. They were just consistently scheduled on the lowest-sales days of the week and the lowest-sales hours of the day. Nobody had flagged it. Nobody had thought to look. The frame we'd been handed didn't have a slot for that information, so it just got left out.
Once we layered in the scheduling data, the entire narrative shifted. What looked like a performance problem was actually a scheduling optimization opportunity. That employee wasn't someone who needed a coaching conversation; they were someone whose schedule needed a second look. Two completely different outcomes. Same underlying data.
The dashboard was right. The frame was just wrong. And if we'd delivered it as scoped, that distinction would have stayed invisible.
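To make that concrete, here's a minimal sketch of what layering in that kind of context can look like in pandas. Everything here is hypothetical: the data, the column names, and the idea of using an "expected units" baseline per schedule are illustrations of the mechanic, not the actual analysis we ran.

```python
import pandas as pd

# Hypothetical example data -- not the client's schema or my actual code.
# "units" is each rep's sales for the period.
sales = pd.DataFrame({
    "rep":   ["Ana", "Ben", "Cara", "Dev"],
    "units": [120,   95,    60,     115],
})

# One context variable layered in: the sales volume a rep's scheduled
# shifts would typically produce (e.g., store-wide average for those
# days and hours). The column name is made up for illustration.
opportunity = pd.DataFrame({
    "rep":            ["Ana", "Ben", "Cara", "Dev"],
    "expected_units": [115,   92,    52,     118],
})

df = sales.merge(opportunity, on="rep")

# Frame 1: the ranking as originally scoped -- raw units sold.
df["raw_rank"] = df["units"].rank(ascending=False).astype(int)

# Frame 2: same data, one assumption removed -- units sold relative
# to what each rep's schedule actually made possible.
df["vs_schedule"] = df["units"] / df["expected_units"]
df["adj_rank"] = df["vs_schedule"].rank(ascending=False).astype(int)

print(df.sort_values("raw_rank"))
```

In this made-up data, the rep at the bottom of the raw ranking (Cara, 60 units) jumps to the top once the schedule is accounted for. That's exactly the kind of reframe the three audit questions above are designed to surface.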
🕹️ Trivia
Which tech giant was originally founded as a PhD research project at Stanford University?
A. Amazon
B. Meta
C. Google
D. Oracle
Answer at the bottom of this issue
Interesting Reads (TL;DR)
Simpson’s paradox and segmenting data by Mixpanel Team
Have you ever heard of Simpson's paradox? In an enlightening example under the "What is Simpson's paradox?" section here, two doctors perform two procedures with two very different success rates, but what does the context tell us? Read more →
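A quick illustration with made-up numbers of my own (not the article's): suppose Doctor A succeeds on 95 of 100 easy cases and 210 of 300 hard ones, while Doctor B succeeds on 270 of 300 easy cases and 55 of 100 hard ones. Doctor B looks better overall (325/400, 81%, vs. 305/400, 76%), yet Doctor A has the higher success rate on both case types (95% vs. 90% on easy, 70% vs. 55% on hard). A simply takes the harder cases. That's the paradox: the aggregate and the segments tell opposite stories.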
When KPIs Drive the Wrong Result by Stacey Barr
Stacey Barr breaks down what happens when a metric is technically being hit but the actual outcome is getting worse. The example she uses is a rail network tracking on-time performance: trains that were running too late started getting cancelled instead, because cancellations didn't count against the KPI. A good reminder that frame problems don't just live in dashboards; they live in any system where a metric is trusted without being questioned. Read more →
Why Data Alone Isn’t Enough: The Case For Contextual Analysis by Team Sigma
The piece breaks down three specific ways context-blind analysis goes wrong: judging performance from a single time period, analyzing metrics without cross-referencing related systems, and celebrating improvements against flawed baselines. Read more →
Resources & Tools
ColorSlurp #productivity
ColorSlurp is the ultimate suite of color tools for designers, developers, and artists. This is the color picker I use when developing color palettes for my data visualization projects.
Litmaps #productivity #research
A lot of students subscribe to this newsletter, so this one's for you (and researchers too). Next time you need to write a paper, check this site out. It helps you find credible, peer-reviewed literature on whatever topic you're working on. I'm not being dramatic when I say it saved me DOZENS of hours of research when I was in university.
This Week’s Quick Study
▶️ APIs Explained (in 4 Minutes) by Exponent (4 mins)
Incredible video that explains what an API is. I've been working with APIs for almost a decade, and this is probably the best explanation I've ever heard.
CLASSIFIEDS
AFFILIATE
Design visuals your stakeholders actually understand.
Data Visualization: An Audience-First Approach teaches you to build dashboards and stories that inform, inspire, and drive decisions. PlotStack readers get 50% off with code DATAVIZ50.
FROM THE EDITOR
Free Notion templates built for data professionals.
Trusted by 1,000+ users, download the templates designed to keep your goals, projects, and ideas in perfect sync.
🕹️ Answer
Which tech giant was originally founded as a PhD research project at Stanford University?
A. Amazon
B. Meta
C. Google ✅
D. Oracle
Google began as a research project called "BackRub" by Larry Page and Sergey Brin while they were PhD students at Stanford in 1996. The project aimed to rank web pages based on the number of links pointing to them, a concept that became the foundation of PageRank and eventually the most dominant search engine in history.
How was this week’s issue?
Newsletter publishing is hard work, and it's just me running the show here. If you ever feel like extending a thanks, idea, or insult, you can do that here.
Or email me directly at → [email protected].
If you’re feeling generous, I also keep an Amazon Wishlist of books and tools I’m interested in.