This is the first issue of Context Drift. In AI, context drift is what happens when the information a model was trained on no longer matches the world it's operating in. The assumptions still feel right, but the ground moved. I think that's happening to all of us right now, not just models. I've spent most of my career observing how people make decisions, how they change, and how technology quietly rearranges both. I wanted a place to write about that. There's something interesting happening in how we're building right now, and I think it's worth paying attention to. If you like noticing things, you're in the right place.

I asked ChatGPT why my daughter insists on wearing her oversized shiny metallic gold puffer jacket when it's gonna be 80 degrees today.

It gave me a dissertation: thermoregulation, developmental psychology, parenting advice, and seasonal transitions.

What I needed: kids are weird.

I didn't read the answer.

Human attention is a narrow pipe. AI is a firehose.

Most products aim the firehose straight at the pipe and act surprised when something bursts. Ask a simple question, get 500 words, and here's what happens: overload, skimming, then a quiet decision not to bother next time.

The system is often technically helpful and functionally useless.

The point of AI isn't completeness. The point is capability.

So the metrics change:

Time to comprehension. Capability delta. Voluntary depth.

In other words: how fast did they get it? Can they do something now they couldn't do before? Did they choose to go deeper, or did they bail?
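If you wanted to instrument those three, they could look something like this rough session-level sketch. Everything here is hypothetical: the event names, the fields, and the instrumentation are illustrative, not any real product's telemetry.

```python
from dataclasses import dataclass

@dataclass
class Session:
    # Hypothetical instrumentation fields, not a real API.
    answer_shown_at: float   # seconds into the session when the answer rendered
    first_action_at: float   # user acts on the answer (copies, clicks, replies)
    could_do_before: bool    # succeeded at a comparable task before the answer
    could_do_after: bool     # succeeded at the task after reading the answer
    expanded_detail: bool    # voluntarily opened the "more" section

def time_to_comprehension(s: Session) -> float:
    """Seconds from answer delivery to the first sign the user got it."""
    return s.first_action_at - s.answer_shown_at

def capability_delta(s: Session) -> int:
    """+1 if the answer unlocked something new, 0 if nothing changed, -1 if it hurt."""
    return int(s.could_do_after) - int(s.could_do_before)

def voluntary_depth(sessions: list[Session]) -> float:
    """Share of users who chose to go deeper rather than bail."""
    return sum(s.expanded_detail for s in sessions) / len(sessions)

sessions = [
    Session(0.0, 6.0, False, True, True),     # got it fast, went deeper
    Session(0.0, 25.0, False, False, False),  # skimmed, bailed
]
print(time_to_comprehension(sessions[0]))  # 6.0
print(capability_delta(sessions[0]))       # 1
print(voluntary_depth(sessions))           # 0.5
```

Note what's absent: tokens generated, words per answer, session length. None of those measure whether the narrow pipe actually received anything.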

Product tips:

Build for eight-second comprehension first, infinite depth second. The depth should be invited, not imposed. The teams that figure out pacing will beat the teams that figure out parameters.
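"Eight-second comprehension first, infinite depth second" is a response shape, not just a tone. One way to sketch it, with entirely hypothetical names and content: a headline that stands alone, and layers the user has to ask for.

```python
from dataclasses import dataclass, field

@dataclass
class LayeredAnswer:
    # Hypothetical response shape: the headline must work on its own;
    # depth exists but is shown only when the user requests it.
    headline: str                          # readable in ~8 seconds
    layers: list[str] = field(default_factory=list)

    def render(self, depth_requested: int = 0) -> str:
        """Show the headline plus only as many layers as the user asked for."""
        shown = [self.headline] + self.layers[:depth_requested]
        return "\n\n".join(shown)

answer = LayeredAnswer(
    "Kids are weird.",
    [
        "Sensory preference often beats weather logic at this age.",
        "If overheating worries you, pack a lighter layer in her bag.",
    ],
)
print(answer.render())                    # headline only: "Kids are weird."
print(answer.render(depth_requested=2))   # headline plus both layers
```

The design choice is where the default sits: depth zero unless the user signals otherwise. Inverting that default is how you end up aiming the firehose at the pipe.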

The future isn't infinite answers. It's one true sentence, delivered in time for a human brain to catch it.

I've been thinking about the "quiet decision not to bother" more than anything else I wrote above. Not the overload. The bailing. It's happening in rooms where nobody's naming it, on products we shipped last quarter, with users who used to care. The scary version isn't that AI is too much. The scary version is that people are already learning to tune it out, and the tune-out is quiet enough that the dashboards still look fine. So the question I'm sitting with this week, and the one I'll probably keep coming back to: what are your users already skipping that you haven't noticed yet?
