Hello Reader,

A few weeks ago, I was about to simplify an onboarding step in my app that felt clunky: an extra screen no one seemed to need. I was ready to cut it to speed things up. But before I scrapped it, I paused. Why was it there in the first place?

Turns out that step wasn’t just friction. It included an authentication check that helped prevent drop-off and improved conversions. Removing it would have quietly broken the system.

That’s Chesterton’s Fence in action: don’t remove a fence until you know why it was built.

Chesterton’s Fence is a mental model that warns against tearing down rules, systems, or constraints just because we don’t immediately see their purpose. It reminds us to investigate first, to understand what problem the system was solving, before assuming it’s safe to discard.

I’ve loaded 100 of the most powerful mental models into Re:Mind, a pocket-sized toolkit for better thinking. We successfully wrapped up our Kickstarter campaign with $8.5K in pledges. If you missed the campaign, late pledges are still open.

Why Use It

It’s tempting to clean up, streamline, or cut steps we don’t immediately understand. But what looks unnecessary might be protecting something you haven’t seen yet. Here’s what Chesterton’s Fence can help you do:
When to Use It

This model shines in moments when you’re ready to “optimize” without asking why something exists. It’s a critical check before pruning policies, workflows, or traditions. These are the moments where Chesterton’s Fence matters most:
How to Use It

In Chernobyl (the miniseries), the disaster unfolds because operators disabled critical safety systems during a test they believed was routine. They assumed the safeguards were obstacles to efficiency, but they didn’t fully understand why those protocols were in place. When they pushed the reactor beyond safe limits, the systems they had bypassed could no longer protect them. The AZ-5 shutdown button, the ultimate fallback, failed in ways no one expected because they misunderstood the reactor’s complex design.

Chesterton’s Fence warns us: just because a safeguard seems unnecessary doesn’t mean it is. The operators assumed they understood the system well enough to override it, but they skipped the hard step of investigating its original design and hidden dependencies. It’s a perfect example of why systems that seem redundant might exist for reasons that are no longer visible but still vital.

Here’s how to apply it:
Next Steps

The next time you’re about to scrap a step, rule, or system, pause. Ask: “Why was this built?” Don’t tear it down until you know what it’s protecting.

Where It Came From

The principle comes from G.K. Chesterton’s writing in the early 1900s. His insight: if you don’t see the use of a fence, don’t remove it until you understand why it was built. Modern thinkers use it as a safeguard against false simplification, especially in complex systems like organizations, policies, and software.

Until next time, keep exploring and questioning. Your unique perspective is your greatest asset.

Think Independently,
JC