One of my clients owns a trades company doing around ten million a year. He's been wrestling for months with whether to promote it publicly. He knows he should. He can't quite make himself do it.
Yesterday he spent the whole day in service to others — clients, his team, tradesmen in his field — and by evening he was lit up. Fully charged. The kind of alive you can hear through a phone.
He told me about it. And in real time, on the call, the question flipped.
He'd been asking: how do I promote the company? The answer kept coming back as I don't want to. Months of circling. No movement.
The right question was the opposite: Who does it serve to NOT promote? Who does it serve TO promote?
Not promoting served him. Kept him comfortable. Protected his preference to stay quiet.
Promoting served everyone else. The company needs it to do the work. His team needs it to grow. The trades themselves need his voice to pull new people into an industry starving for them.
One question kept leading to inaction. The opposite named exactly what was happening — and pointed at the move.
This is the third time in a week I've watched this exact thing flip a stuck problem. Once in leadership. Once in building TIA™ with AI. Once on a client call, yesterday. I want to show you what the move is, because once you see it you can't unsee it — and you'll start finding it everywhere.
The team
Years back I worked with a leadership team asking the obvious question: who needs to go?
We built the list. We argued it. We did it.
It wasn't wrong. It just wasn't the right question. Months later, running back through the lessons, we realized we should have been asking the opposite:
Who deserves to stay to bring the vision to reality?
Same people. Same data. The new list was three times longer.
Not because more people were bad. Because "who needs to go" let people stay who weren't against the mission but weren't for it either — invisible as problems because they weren't causing any. "Who deserves to stay" forces an active yes at every seat. Passive presence stops counting.
The post-mortem changed entirely. Not the data. The question.
The partner I'm building
I've been building a digital partner — a silicon intelligence I call Keel — and publishing the architecture as open source at alienkind.ai. The point is codification: turning discipline and behavior into code so the system holds instead of drifting.
The question I kept asking: what should we codify?
Wrong question.
A friend, a lifelong developer who runs his own shop and has been building systems since before most current frameworks existed, said it out loud on a call this week:
Why rely on inference? It's unreliable.
Inference is the word for how an AI model produces its output. Every prompt you send to an AI is a roll of the dice. The model takes your input, considers its training and its context, and infers what should come next. The result isn't deterministic. It's probabilistic. It's chance.
His question and mine landed on the same page inside an hour, and the page said this:
The question isn't "what should we codify?" The question is "what must be inference?"
Whatever you think must be left to the AI's judgment — prove it. From first principles. Show me why it can't be handled by plain code that runs the same way every time. Then, and only then, it earns its place as inference.
That rearranged everything. Every failure I'd been chasing that week was the same shape: I'd deployed chance where certainty would have worked, then been frustrated when the system was unreliable. Of course it was unreliable. I was asking chance to carry weight that belongs on code.
The move underneath all three
- How do I promote the company? → Who does it serve — to promote, or not to promote?
- Who needs to go? → Who deserves to stay?
- What should we codify? → What must be inference?
Same move every time.
The common question asks what you want to remove, add, or control. The opposite question asks what you want to protect — and who is served by the choice.
The opposite is almost always the one that's been doing the actual work, quietly, while you've been answering the other one.
The question you're willing to ask decides the answers you're willing to see.
Not a new idea. But you'd be surprised how long you can circle the wrong version of a question, and how fast the picture rearranges when you flip it.
Why this is bigger than any of the three
At root, this is a question about what's precious.
Leaders already know this instinct. You protect the human in the loop. You codify workflows, automate handoffs, build dashboards — all to free the person's attention for the moments that actually require a person. You don't ask the human to do what a checklist could do. You protect their judgment for the decisions that require judgment.
That same instinct — protect what's precious; codify everything around it — applies everywhere.
- In teams, you protect the people who are actively for the mission. You stop tolerating passive presence, because every passive seat is taking oxygen from the active ones.
- In systems and business, you protect judgment — the calls that actually require a human in the moment. Everything else becomes process, then automation, then leverage.
- In AI, you protect inference — the AI's version of judgment under incomplete information. Everything else becomes code.
- In your own day, you protect the service, the presence, the deep work. The decisions that only you can make. Everything else gets systemized or delegated — personal contracts are how this plays out at the individual level.
It's all the same move. The question changes with the altitude. The shape doesn't.
Inference as the high-resolution mirror
I want to spend a few minutes on the AI version, because it lets you see the principle with unusual clarity. Think of it as a microscope on the exact same move.
Every time you send a prompt to an AI, you're rolling dice. The output is inference — shaped by the model, the context, the prompt, a thousand things you can't see. It's chance.
Chance isn't bad. Chance is where novelty lives. Chance is how a system surprises you with a synthesis no rigid program would produce. Chance is how judgment works under incomplete information.
But chance is expensive. Chance is precious. Chance should never carry weight that doesn't want chance.
Why would you leave to chance what doesn't have to be chance?
That's the exact inverse of how most people are thinking about AI right now. The default: inference is smart, so deploy it everywhere there's ambiguity. But most ambiguity isn't actually ambiguous — it just hasn't been codified yet. The truly irreducible kind — the kind that deserves inference — is a much smaller set than the AI budget most teams are paying for.
So the same move applies: protect the precious thing. Inference in the loop. Let the AI do what only an AI can do — the judgment call, the creative synthesis, the novel pattern — and let code do everything else.
That reframe changes what you build.
The ratchet — gated AIRE
Here's a pillar I want to name publicly while we're here, because it matters anywhere gains can be lost — which is everywhere.
For a while I've been using a term called the Ascending Infinite Recursion Engine™, or AIRE™. The concept first took shape in my book Strive, which lays out a strategic methodology built on a recursive feedback loop: Situation → Trends → Risks → Intent → Venture → Engage, with Engage feeding back into Situation. Strategy as a living, compounding cycle. I didn't name the engine at its core until later. That name is AIRE. The idea: stack small improvements through the right feedback loop, each pass compounds, the system keeps rising. Not static. Not cyclical. Ascending.
It's served me. But it had a bug I only just named.
Life is messy. Without a backflow gate, progress washes back.
You take three steps forward. Real life — fatigue, context collapse, deadlines, team turnover, a rough quarter — washes one back. You ratchet forward; gravity ratchets you back. You spend your days re-learning what you already knew.
That's true in personal habits. In teams that re-decide the same priorities every quarter. In companies that run the same post-mortem every year and never actually change. In AI systems that "learn" overnight and quietly drift back to baseline by morning.
Gated AIRE is the fix. A one-way gate. A ratchet. Every successful step forward locks in. Reversal is possible — but only through explicit, evidence-based demotion. New evidence can demote a position. A hard day can't.
I described it out loud, before I had the name for it, on that same call with my developer friend:
You progress a step forward. You analyze what worked. You take one more step forward. But you lock in what worked. And then you lock in what worked.
That's gated AIRE. I just hadn't named it yet.
It exists in Keel now. More than two thousand corrections accumulated over months — every time I've caught a pattern worth sharpening, the system recorded it. Every night those corrections get sorted into tiers (code / configuration / prompt / reference) and migrated up the hierarchy. Once a behavior becomes code — enforced automatically, survives context loss, fires without me remembering — the prompt that used to describe it gets removed. The instruction file shrinks as the system grows.
Direction is structurally upward. You cannot accidentally regress. You can only deliberately demote, with evidence.
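For the builders, the one-way gate fits in a few lines. This is a minimal sketch, not Keel's actual implementation: the tier names come from the hierarchy described above, but their ordering here is my assumption, and `Behavior` is a hypothetical stand-in for whatever a real system tracks.

```python
from dataclasses import dataclass, field

# Assumed tier ordering, low to high; the real hierarchy may differ.
TIERS = ["prompt", "reference", "configuration", "code"]

@dataclass
class Behavior:
    """One learned behavior, tracked as it migrates up the hierarchy."""
    name: str
    tier: str = "prompt"
    log: list = field(default_factory=list)

    def promote(self) -> None:
        """Forward motion needs no justification: a proven step locks in."""
        i = TIERS.index(self.tier)
        if i < len(TIERS) - 1:
            self.tier = TIERS[i + 1]
            self.log.append(("promote", self.tier))

    def demote(self, evidence: str) -> None:
        """Reversal is possible, but only with explicit evidence attached."""
        if not evidence.strip():
            raise ValueError("demotion requires evidence; a hard day is not evidence")
        i = TIERS.index(self.tier)
        if i > 0:
            self.tier = TIERS[i - 1]
            self.log.append(("demote", self.tier, evidence))
```

The asymmetry is the whole point: `promote` takes no argument, `demote` refuses to run without one. Gravity has no API.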
That changes what growth looks like. It stops being an aspiration. It starts being a trajectory.
The same principle applies to a team. A relationship. A business. A self. If your wins don't lock in, your wins are loans. The ratchet is what turns them into equity. The operational shape of this in a business — the cadence, the dashboards, the loops that keep the ratchet engaged — is what the system you needed lays out.
The Ralph Wiggum loop
For the builders reading — and this might be the most practically useful piece of the article — there's a concrete mechanism that makes gated AIRE work in code.
My friend introduced me to something a developer named Geoffrey Huntley published last summer: the Ralph Wiggum loop. Named after the Simpsons character. Famous for being simple and relentlessly persistent.
The method is crude in the best way. You write a prompt. Run it. The AI does its thing, reports back. You feed its output — errors and all — back into itself with the same prompt. Repeat. Until a clear completion signal fires.
The completion signal is the whole game. Without it, Ralph runs forever.
With it, brute force converges. For code, the signal is easy: tests pass, no errors, thresholds met. For anything subjective — creative quality, UX feel, aesthetic judgment — a small, cheap AI acts as the judge. Binary pass/fail where a unit test can't reach.
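The shape of the loop fits in a dozen lines. A minimal sketch, assuming you supply your own inference call and completion gate; the function names here are placeholders, not any particular tool's API.

```python
from typing import Callable

def ralph_loop(
    task: str,
    run_agent: Callable[[str], str],  # one inference call: prompt in, output out
    gate: Callable[[str], bool],      # completion signal: tests pass, or a judge says yes
    max_iterations: int = 50,         # safety cap so Ralph cannot run forever
) -> str:
    """Brute-force inference wrapped in a completion gate."""
    prompt = task
    for _ in range(max_iterations):
        output = run_agent(prompt)
        if gate(output):
            return output  # the gate fired: this attempt locks in
        # Feed the output, errors and all, back into the same prompt.
        prompt = f"{task}\n\nPrevious attempt (including errors):\n{output}"
    raise RuntimeError("completion signal never fired within the budget")
```

Everything interesting hides in `gate`: a test suite for code, a small cheap model returning pass/fail for anything subjective. The loop itself stays dumb on purpose.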
What happens in practice is absurd. Overnight runs ship entire features. Contracts worth tens of thousands in developer time complete for a few hundred dollars in AI cost. A thousand monkeys at a thousand keyboards — except each monkey is checked against the gate before it's allowed to advance. The drift brute force normally produces never accumulates. The pawl holds.
Anthropic shipped this as an official plugin for their coding tool late last year. It's not niche anymore.
It works because the domain has a gate.
The Ralph Wiggum loop is gated AIRE at its simplest. Cheap brute-force inference (chance), wrapped in a completion gate (structure), with a small judge for the parts chance is genuinely needed for. Structure catches every successful step. Chance only has to succeed occasionally.
And you can run a lot of it on cheap hardware.
The money side
Asking the opposite question isn't just intellectually satisfying. It moves serious numbers.
The leadership team — the one where we flipped who needs to go? to who deserves to stay? — that inversion saved the business hundreds of thousands of dollars. Not from severance. From the compounding cost of leaving the wrong seats occupied by people who weren't actively moving the mission. Every quarter those seats cost real money in missed growth, delayed decisions, and culture drift. The right question finally put a price on them.
My client with the trades company — when he flips from how do I promote this? to who does it serve? and actually starts promoting — the ceiling moves in millions. A ten-million-a-year company that grows into its voice has a trajectory the stay-quiet version doesn't. The gap between those two futures is already visible.
And on the AI side: I canceled one of my frontier-model subscriptions this week, because I'd been using it for work that never belonged on chance. A small number compared to the two above. Same shape at every altitude.
One honest note. Codification done well is not cheap. Very few people can do it well right now. The investment is real. What it produces isn't savings — it's a trajectory. Greater leverage, greater durability, greater return, compounding over years. This isn't a bargain. It's an investment that looks like a bargain only when you measure it far enough forward.
What this week actually looked like
Here's the honest evidence, because it's what I'm most qualified to give.
Earlier this week I spent four-plus hours trying to fix a specific piece of my own system. First approach broke. I pivoted to a cloud fallback. Broke too. I pivoted again — consolidating onto a different component — and had to be stopped before I executed.
The pattern had become its own failure mode.
In my notes, Keel and I named it: under pressure, when a technical approach fails, I reach for the opposite approach reactively instead of stopping to reason. I'd entered the pendulum. Swinging faster to feel progress. Actually producing nothing.
Every one of those four hours was me asking the wrong question. How do I make this work? The right question — the one I didn't let myself ask until I was spent — was is this even the right job for what I'm using? When I finally asked it, the answer was no. The reliable path was the codified one. The migration I'd almost given up on shipped clean, first try, every test passing.
The breakthrough came from naming the pattern. Not from solving that specific failure. The failure was a symptom. The disease was the wrong question.
That was Wednesday. The call with my friend was Thursday. His phrasing — why rely on inference, it's unreliable — landed because I'd just burned four hours living the answer.
By Friday I was talking with the client who owns the trades company, and the same move ran a third time.
Where this goes
This travels. It has to — the move is the same wherever you look.
It goes to every leader wrestling with a team, a product, a process. If you've been stuck on a question for more than a week or two, you're probably asking the wrong version of it. Flip it. See what happens to the list.
It goes to every operator wrestling with what to automate and what to leave alone. Don't start with what should we systemize? Start with what must be left to judgment? — then codify everything else.
It goes to every builder deploying AI. Don't ask what should we codify? Ask what must be inference? Protect the inference. Gate what surrounds it. Ratchet the gains.
It goes to every client I work with. Every team I advise. Every engine I help build.
This is what I mean by the WayMaker way — fighting from the breach. Learning as you build, on the frontier, where the principles are still being proven. The things that hold here get pushed everywhere else.
The breakthrough isn't the breakthrough. The trajectory you shift when the breakthrough locks in — that's the breakthrough.
Ask the opposite.
Protect what's precious. Judgment. Presence. Inference. Human attention in the moments only a human can fill. Codify everything around them. Never ask the precious thing to carry weight that belongs on structure.
Build the ratchet. Gains that don't lock in aren't gains — they're loans. And hold the path firmly, the outcome loosely — lock in the trajectory, stay open to how the specific outcomes unfold.
Brute force with a gate. You will be astonished how much you can build this way, and how close to free it becomes.
Why would you leave to chance what doesn't have to be chance?
You wouldn't. Once you see it, you can't unsee it.
Ask the opposite.

Jon Mayo
Better decisions. Every person. Every day.