In August 2025 I wrote a thought card about structured data. The argument was simple. Treat your data as a long-lived asset that should be findable by any system, and you build infrastructure that compounds. The card said nothing about agents. I had no idea WebMCP was coming.
Six months later, Google announced WebMCP and the agentic web: a standard protocol for AI agents to act on structured web data. A protocol I’d never heard of, designed for a use case I hadn’t predicted. And the platform I’d been building already fit it. Not because I saw the future. Because the principle didn’t depend on seeing the future.
That’s the small version of the pattern this essay is about. I’ve been writing about adjacent things for ten weeks. Four observations, four different parts of the work. Only when I lined them up did I see they were the same shape. Each one was about something I had built without trying to build it. Each one turned out to be durable in a moment when the rest of the craft was getting cheap.
This essay is about that shape.
The current AI question is what it replaces. That’s the wrong frame. AI is going to commoditize whatever can be commoditized, and the commoditization is not personal. The more useful question is what it can’t. Which parts of the work hold their value when execution gets cheap? Which edges survive when the surrounding craft cheapens?
Four answers. Each from a different altitude. Each with the same mechanism underneath.
Principles
Back to the August 2025 thought card. I didn’t write it because I predicted WebMCP. I wrote it because “touch it once, make it findable by any system” is just good practice. “Computers do the mundane, humans critically think” is just a sound division of labor. These are principles, not predictions.
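The principle is concrete in practice. A minimal sketch of what "touch it once, make it findable by any system" looks like as structured data: a hypothetical auction lot expressed once as schema.org JSON-LD, readable by a search engine, an agent, or an internal tool alike. The field names and values here are illustrative, not from the original card.

```python
import json

def to_jsonld(lot: dict) -> str:
    """Express a lot once, in a vocabulary any system can read."""
    doc = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": lot["title"],
        "description": lot["description"],
        "offers": {
            "@type": "Offer",
            "price": lot["current_bid"],
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock",
        },
    }
    return json.dumps(doc, indent=2)

# A hypothetical lot, touched once at intake.
lot = {
    "title": "CNC Lathe, 2019",
    "description": "Single-owner machine shop liquidation.",
    "current_bid": 14500,
}
print(to_jsonld(lot))
```

Nothing in that sketch anticipates any particular consumer. That is the point: a protocol like WebMCP can show up years later and find the data already legible.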
That’s what good principles do. They’re robust across futures you didn’t predict. Predictions are fragile. They’re right or wrong, and the world rarely cooperates with specifics. Principles are antifragile because they’re grounded in how things actually work, not how you think they’ll play out.
The companies that survive transitions aren’t the ones that saw them coming. They’re the ones whose principles already pointed where things were headed.
Principles get weight by being tested under pressure. The principle that survives a contract dispute, a buyout offer, a public mistake, a quarter where the easy move would have been to compromise it. That’s the one with weight. The rest are preferences dressed up as wisdom.
AI can write a principle. It can’t be wrong about a principle and then keep going.
Operator Leverage
The auction industry is at its veterinary moment. Aging owner-operators with no successor. Fragmented ownership. Private equity buyers with capital ready to underwrite on cash flow. The exact triad that took veterinary EBITDA multiples from 4x to 12-15x in a decade. Demographic inevitability is meeting financial appetite, and the offers are starting to land.
A second wave is running in parallel: the platform layer is being unbundled by AI in real time. A platform stack used to take years and serious capital to replicate, which is what made platform aggregators worth premium multiples. That math is dead. A solo operator with the right tools can stand up a bidder portal, catalog system, and marketing engine that would have been a major project two years ago. The platform vendors paid for yesterday’s moat. The recoup window they bought is compressing.
What you get isn’t a roll-up. It’s a barbell.
At one end, large consolidated houses with operator leverage that AI doesn’t replicate. National business development pipeline. Brand built over decades. Bidder lists in the hundreds of thousands. Contractor networks. Compliance infrastructure that took years to assemble. At the other end, AI-empowered regional and solo operators who used to need a platform vendor and now don’t. The squeezed middle is the traditional mid-tier firm with neither scale nor software self-sufficiency. That’s the cohort that gets repriced.
Operator leverage accumulates over decades of showing up. The bidder list isn’t a database. It’s a record of every auction someone bought from us. The contractor network isn’t a vendor list. It’s every removal someone delivered on time. The brand isn’t a logo. It’s the sum of decisions people remember us making, the calls we made when they were watching.
AI can run the auction. It can’t have been the one who built the trust that made the contractor pick up the phone.
Doctrine
Three high-stakes decisions in three weeks all routed through the same AI strategy pipeline I built into my own tooling. A bid retraction notification. A first-of-its-kind hire. A grievance screening with no precedent. Different stakes, same mechanic. And the strategy note from the third said it out loud: “the letter itself serves as the v0.1 screening rationale precedent.”
Not the framework that gets written later. Not a separate doctrine document with a deadline. The disposition letter is the precedent. The decision and the doctrine are the same artifact.
This is what the pipeline does. Board phase produces a panel-by-panel argument. Simulate produces a 90-day scenario. Council produces named dissent. Strategy produces reconciliation. By the time the call gets made, there’s a written argument trail that explains why, not just what. The next decision in the same neighborhood doesn’t start from zero. It starts from “see how we handled the last one.”
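The mechanic can be sketched as a pipeline where every phase appends to a single argument trail, so the artifact of the decision is also the precedent for the next one. Everything below is illustrative; the names mirror the phases described above, not any real tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    question: str
    trail: list = field(default_factory=list)  # the written "why"

    def phase(self, name: str, output: str) -> "Decision":
        # Each phase appends its argument rather than replacing it,
        # so the final artifact carries the full reasoning trail.
        self.trail.append(f"{name}: {output}")
        return self

d = (
    Decision("Screen the grievance?")
    .phase("board", "panel-by-panel argument for and against")
    .phase("simulate", "90-day scenario for each disposition")
    .phase("council", "named dissent from the risk seat")
    .phase("strategy", "reconciliation; the letter doubles as precedent")
)

# The next decision in the same neighborhood starts here, not from zero.
print("\n".join(d.trail))
```

The design choice worth noticing is that no phase is allowed to overwrite an earlier one. Dissent stays in the record next to the reconciliation, which is what makes the artifact readable as case law later.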
What’s emerging is a habit of designing the artifact on the way in. The disposition letter has a different reader than the obvious one. The complainant gets the answer. The future complainant gets the framework. That changes what the letter says, and how it’s structured, and which test gets foregrounded.
Doctrine accumulates by surviving real complaints. A documented framework that came out of an actual decision has weight that a policy written in the abstract never has. It survived a real edge case. It got read, tested, cited. The next person making a similar call doesn’t get a memo titled “How We Handle X.” They get the actual letter and the actual reconciliation, and they read it the way lawyers read case law: what did the deciders do when they were under pressure.
AI can draft a screening framework. It can’t be the deciders who were under pressure.
Sacrifice
The framing comes from a Gary Vaynerchuk bit I wrote about yesterday. 1-percenter mode runs four dials, not one. Hustle, grind, output, sacrifice. The first three get all the airtime. They’re measurable, performable, braggable. Productivity tools have made each one observable in real time. Public hustle culture has commoditized the performance of effort.
The fourth dial is the one nobody else runs.
Sacrifice accumulates as life shape. The cumulative subtraction of what you didn’t do. The other career you didn’t take. The hobby you stopped having time for. The version of yourself you didn’t get to develop. It doesn’t show up on a balance sheet or a feature comparison. It shows up in the gap between what someone built and what they could have built if they’d done other things instead.
AI doesn’t have an alternative life it gave up.
Calibration
Each of these edges has the same failure mode. The thing that made it durable also contains its risk.
Principles harden into walls you can’t see. Doctrine starts optimizing for the future reader and stops answering the person it was written to. Operator leverage becomes the institutional bloat the next generation has to clear out. Sacrifice eats the version of you the 1% life was supposed to produce.
The edges are durable. The risks are real. The work is reading both, not betting on one.
AI eats execution. It won’t eat the critical thinking underneath.