We'd spent months redesigning a search system. The new approach was meaningfully better: it handled typos, matched out-of-order words, and returned more relevant results.

The challenge was demonstrating the value.

We tried explaining the technical differences: data structures, retrieval algorithms, matching strategies, scoring models. The only consistent effect was more meetings to discuss it further. We tried showing one system and then the other. That created more questions than answers, because people couldn't hold both experiences in their heads at once and compare them fairly.

Then, mostly out of desperation and wanting to make my own life easier (people kept asking me to show them the results, and I was tired of running demos), I built a simple side-by-side comparison UI. Nothing fancy. You type a query, and you see what the old system returns on the left and what the new system returns on the right. Same query, same data, two columns. I just wanted to let people experience the difference for themselves instead of me trying to explain it.
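The core of such a tool is almost embarrassingly small. Here's a minimal sketch of the idea in Python — not the actual tool, and `old_search`, `new_search`, and the toy catalog are invented stand-ins for the two systems being compared:

```python
def compare(query, old_search, new_search, width=40):
    """Render results from both systems side by side as plain text."""
    left, right = old_search(query), new_search(query)
    rows = []
    for i in range(max(len(left), len(right))):
        l = left[i] if i < len(left) else ""
        r = right[i] if i < len(right) else ""
        rows.append(f"{l:<{width}} | {r}")
    return "\n".join(rows)

# Toy stand-ins: exact substring matching vs. crude typo tolerance.
catalog = ["wireless mouse", "wired keyboard", "usb hub"]

def old_search(q):
    return [p for p in catalog if q in p]

def new_search(q):
    # "fuzzy" here just means most query characters appear in the product name
    return [p for p in catalog if sum(c in p for c in q) >= 0.8 * len(q)]

# A misspelled query: the old column comes back empty, the new one doesn't.
print(compare("wireles mouse", old_search, new_search))
```

The matching logic is deliberately naive; the point is the two-column frame, which makes the difference visible without anyone explaining anything.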

This caught fire.

The side-by-side tool became the default way to test and showcase the system. Jaws would drop every time we showed it to someone new. Our CEO saw it and immediately understood. Our consultants started using it in client conversations. Even our customers reacted viscerally when they could see the difference, typing a misspelled product name and watching the new system find it while the old system returned nothing. It unlocked something fundamental in how we communicated and sold this improvement. It's gotten to the point that customers are starting to request the tool as part of their standard package, and we're thinking about how to make it part of their deployments.

Months of careful technical discussion had produced polite interest and more meetings. A simple comparison tool I built to save myself time produced organizational momentum that reached the CEO. That ratio is absurd. And the more I think about it, the more I notice this same pattern everywhere: the moment something becomes tangible, the conversation changes in ways that abstract discussion never achieves.

It's not just about comparison tools

The obvious reading is "build a demo before your presentation." That's true but too narrow. Tangibility doesn't have to mean working software. A rough diagram, a mockup, even a bad first draft can do the same work. There's a reason people say it's easier to edit a draft than to stare at a blank page: any artifact, however imperfect, gives people something specific to react to instead of debating in the abstract. What I've come to appreciate is that tangibility takes many forms, and all of them seem to have this same disproportionate effect. My examples here lean toward engineering because that's my world, but the principle holds anywhere people are stuck debating abstractions.

Measurements make assumptions tangible. I've watched teams debate performance theories for weeks, with different engineers pointing at different subsystems, each with plausible reasoning. Then someone profiles the system. The numbers show that one component accounts for the vast majority of the problem, and debate ends. A single measurement can kill competing theories in an afternoon and redirect an entire optimization effort toward what actually matters.
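For what that looks like in practice, a few lines with Python's built-in profiler are often enough. This is a hedged illustration — the subsystems here are invented, with sleeps standing in for real work — but the shape of the exercise is the same:

```python
import cProfile
import io
import pstats
import time

# Two hypothetical subsystems engineers might blame. In this toy
# version, only one of them is actually slow.
def parse_request():
    time.sleep(0.001)

def score_results():
    time.sleep(0.02)  # the real hotspot

def handle_query():
    parse_request()
    score_results()

profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):
    handle_query()
profiler.disable()

# Sort by cumulative time: the top entries name the guilty component.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

An afternoon of this beats weeks of plausible theories, because the output is the same for everyone who reads it.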

Spikes make risks tangible. In architecture discussions, I've seen options survive for weeks on untested assumptions. "We can reuse the existing integration." "That module already handles this case." Then someone spends a day actually tracing through the code and discovers the assumption was wrong. The spike doesn't produce code you ship. It produces knowledge that changes the decision.

Experiments make product assumptions tangible. Teams build endlessly on abstract assumptions about what customers want, without ever testing whether those assumptions hold. The antidote is small, deliberate tests that validate you're solving a real problem before you commit to a full build. I've seen teams spend months building features nobody used, when a two-week experiment would have surfaced the mismatch early enough to change direction.

Each of these is a different flavor of the same thing: replacing "I think" with "I know" (or "I now know I was wrong"), and doing it as early as possible.

Why the disproportion?

What puzzles me is the magnitude of the effect. It's not that tangible artifacts are slightly more persuasive than abstractions. They're dramatically, unreasonably more effective. A quick comparison UI versus months of technical discussions. A single measurement versus competing theories. A day of code investigation versus weeks of whiteboard debate.

I think there are a few things going on.

Abstractions are infinitely debatable. Tangible things are not. When you present a technical comparison in a slide deck, every person in the room constructs a slightly different mental model of what the improvement actually looks like. Those models diverge on magnitude, relevance, and edge cases. The resulting discussion is really a negotiation between competing mental models, which is why it goes in circles. When you put a tool in someone's hands and let them type their own queries, everyone is looking at the same thing. They're not debating your interpretation. They're forming their own.

Tangibility tests assumptions automatically. Every abstraction contains hidden assumptions. Design documents assume that components will integrate cleanly, that performance will be acceptable, that the team has the skills to implement them. Product roadmaps assume that customers want what you're planning to build. These assumptions survive indefinitely in the abstract, which is precisely how teams fall into shipping feature after feature based on untested beliefs about what creates value. The moment you build something, even something small, the false assumptions surface on their own. You don't need to be clever enough to identify them in advance. The act of making something real does the work for you.

People's brains work differently when they can see something. When reading a proposal, people evaluate. When interacting with a working system, they imagine. "What if we added..." replaces "but what about..." The cognitive mode shifts from critical analysis to creative extension. And creative extension is where momentum lives. When our consultants started using the comparison tool with clients, they weren't just demonstrating our work. They were imagining new ways to position it, new conversations to have, new problems to solve. The tool gave them something to think with, not just something to think about.

Tangibility sends organizational signals. Ideas gain traction in an organization by triggering certain signals, and a working artifact sends several. It shows someone believed in this enough to build it (action). It makes the idea real enough that others can picture participating (possibility). And it connects to the organization's self-image as a team that ships things (precedent). Abstract concepts, no matter how thorough, send none of these signals with the same force.

When tangibility misleads

There's a flip side, and it's worth being honest about.

Tangible artifacts can be too persuasive. I've seen polished prototypes spark premature excitement about production readiness. Stakeholders see a beautiful demo and assume the hard work is done, when in reality the prototype is optimized for demonstration, not operation. It handles the happy path perfectly but has no error handling, no monitoring, no edge case coverage. "This looks 80% done!" they say, when it's closer to 20%.

The solution isn't to avoid building tangible things. It's to be deliberate about what you're making tangible and why. A rough proof of concept that shows a workflow running end to end is testing whether the architecture holds. A polished UI mockup that skips all error states is testing aesthetics while hiding complexity. A comparison tool is testing whether the improvement is visible to non-technical stakeholders. Each validates something different, and mixing them up leads to bad decisions.

The rule I've settled on: let the prototype be visibly imperfect. My side-by-side tool was rough. No styling, no loading states, no edge case handling. But it did the one thing it needed to do: let people see the difference for themselves. The imperfections didn't matter because they were irrelevant to what was being validated. When your artifact is testing the right hypothesis, polish is a distraction.

The smallest useful tangible

Looking back, I can see that the moments where things actually moved forward share a common feature: someone stopped talking about what might work and built something that did work, even if only partially. The measurement that ended a month of theories. The spike that killed an assumption nobody had questioned. The comparison tool, built out of desperation, that made jaws drop.

None of these were complete solutions. They were the smallest useful tangible things that could exist. But the gap between "could work" and "does work" is where momentum lives, and each one crossed it.

Whatever the mechanism, the pattern is reliable enough to make a difference. When I face a hard problem now, my first instinct isn't to schedule a meeting. It's to ask: what's the smallest thing I can write, draw, build, or measure that would make this conversation tangible?

The answer is almost always cheaper and faster than I expect. And its effect on the conversation is almost always larger.