January 22, 2026

From Engineer to Operator

Identity in the age of AI: when depth gets commoditized, breadth becomes the moat
ai · career · identity · generalism · meta-skills

I've been thinking about what happens to identity when the thing you identify with gets automated.

For most of my career, "software engineer" wasn't just my job title---it was a core part of how I understood myself. I built things. I debugged systems. I knew the incantations. The craft was the identity.

Now I'm watching AI systems write code that would have taken me days to produce. Not just boilerplate---real algorithmic work, subtle bug fixes, architectural decisions. And I find myself asking: if the machine can do what I do, what exactly am I?

This isn't a post about whether AI will take jobs. It's about something more personal: what happens to craft identity when the craft gets commoditized?


The old model: depth as moat

The traditional career advice was clear: specialize. Pick a domain, go deep, become the expert. Your depth was your moat---the thing competitors couldn't easily replicate.

This made sense in a world where expertise was hard to acquire and slow to transfer. Want to understand distributed systems? You needed years of building them, failing, debugging 3am outages, reading papers, developing intuition. The knowledge lived in human heads, and those heads were scarce.

The mental model was hydraulic: pour time in, expertise comes out. Ten thousand hours. Deliberate practice. Stack up credentials. The person who went deepest won.

And it worked! I knew people who built careers on knowing one database engine cold, or one programming language, or one corner of the regulatory landscape. Their depth made them invaluable.


The new reality: depth gets commoditized

Here's what changed: AI commoditizes depth faster than you can accumulate it.

The model has read all the papers. It's seen all the Stack Overflow answers. It knows the incantations for every framework, every API, every obscure configuration option. And it's getting better at synthesis---not just retrieval, but combining knowledge in ways that produce novel solutions.

This doesn't mean experts are useless. A senior engineer still beats GPT at debugging subtle production issues, understanding organizational context, knowing which corners to cut. Human judgment still matters.

But the delta is shrinking. The distance between "novice with AI" and "expert without AI" is collapsing. And "expert with AI" often beats "expert without AI" by enough that the old hierarchies don't hold.

I noticed this in my own work. I used to pride myself on knowing obscure Python tricks, on having internalized the behavior of particular libraries. Now that knowledge feels less like a moat and more like... trivia. The machine knows the tricks too. It can look them up faster than I can recall them.

The question becomes: if depth can be rented on demand, what's worth owning?


The operator skillset

I've started thinking about a different bundle of skills---what I'll call the operator skillset. It's the set of capabilities that become more valuable as AI gets better, not less.

Coordination. Getting humans and systems aligned toward outcomes. Understanding who has what authority, what the constraints are, how to sequence work. This is organizational sense-making---and it requires understanding incentives, politics, and communication patterns that don't fit in a context window.

Judgment. Knowing which problems are worth solving. Sensing when something smells wrong, even if you can't articulate why. The instinct that says "this approach will fail in production" or "this metric will get gamed." Judgment is the thing that keeps you from optimizing the wrong objective.

Communication. Translating between domains. Explaining technical constraints to business stakeholders. Conveying business requirements to engineers. Writing docs that people actually read. This is a skill that compounds with breadth---the more domains you understand, the better you can bridge them.

Taste. Knowing what good looks like. This is partly aesthetic (good code, good design, good writing) and partly strategic (good problems, good bets, good tradeoffs). Taste is hard to articulate and harder to automate. When someone says "I know it when I see it," they're describing taste.

Relationships. Earning trust over time. Having a reputation that precedes you. Knowing who to call. This is the ultimate non-fungible asset---you can't rent my relationships.

Notice what these have in common: they're all about operating in the world, not just producing artifacts. They require context that's hard to encode, history that's hard to compress, judgment that's hard to specify.


Breadth as advantage

Here's the counterintuitive part: generalists might be better positioned than specialists.

Not because specialists aren't valuable---they are. But because the game is changing from "who knows X deepest" to "who can connect X to Y and Z."

When I write about shrinkage estimators, I draw connections to bond math, credibility theory in insurance, and hierarchical Bayes. That cross-domain linking is fun for me, but it's also strategic. It's the kind of synthesis that's hard to automate because it requires having lived in multiple worlds.

The person who's worked in finance and tech and operations has seen the same structural patterns manifest differently. They know that "technical debt" in code is the same phenomenon as "operational complexity" in logistics---systems that worked well when they were small but don't scale. They can pattern-match across domains.

This is connectionist thinking: building a lattice of knowledge where each node is connected to many others. The lattice is more valuable than any single node because it enables creative recombination.

I've started to see my scattered interests---statistics, philosophy, powerlifting, private equity, programming languages---not as a lack of focus but as optionality. Each domain is a lens. The more lenses you have, the more angles you can see problems from.


The identity shift

Here's where it gets emotionally complicated.

For years, I thought of myself as "a software engineer." That was the core. Other things orbited it.

But that identity claim is really about what I do, not who I am. And "what I do" is contingent---it changes with circumstances, opportunities, capabilities. The tools change. The problems change. Why would identity stay fixed?

I'm trying to shift from:

"I am an engineer."

to:

"I do engineering when it's the right tool."

This sounds subtle, but it's a big psychological move. The first framing attaches identity to a domain. The second attaches it to... what, exactly? Agency? Utility? Problem-solving?

Maybe the better framing is: I am someone who builds things and understands systems. The implementation details---whether I'm writing code or coordinating a team or analyzing data or designing processes---are secondary to the underlying orientation.

This is like shrinkage applied to identity. Don't over-fit to your current role. Pool information across all the roles you've had. The shrunken estimate of "who you are" is more robust than any single job title.
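To make the analogy concrete, here's a minimal sketch of the shrinkage I have in mind, in Python. It computes a credibility-weighted estimate for each role: pull the noisy per-role average toward the grand mean, and pull harder when there's less data. Everything here (the function, the scores, the prior strength k) is an illustrative assumption I'm inventing for the example, not code from anywhere real.

    def shrink(role_scores, k=5.0):
        """Pool noisy per-role estimates toward the grand mean.

        role_scores maps a role name to a list of observations.
        k is the prior strength: higher k means more shrinkage.
        """
        all_obs = [x for obs in role_scores.values() for x in obs]
        grand_mean = sum(all_obs) / len(all_obs)
        shrunk = {}
        for role, obs in role_scores.items():
            n = len(obs)
            z = n / (n + k)  # credibility weight: more observations, less shrinkage
            shrunk[role] = z * (sum(obs) / n) + (1 - z) * grand_mean
        return shrunk

    # A role with one data point gets pulled hard toward the pooled mean;
    # a role with years of evidence mostly keeps its own estimate.
    print(shrink({"engineer": [0.9, 0.85, 0.9, 0.8], "operator": [0.7]}))

The point of the analogy: a single job title is a one-role estimate with high variance. The pooled view is a better guess at who you'll be next.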


Practical steps

If this resonates, here are some things I'm trying:

Diversify deliberately. I've started spending time in adjacent domains---not to abandon engineering, but to build bridges. Reading about organizational theory. Learning about finance. Understanding how decisions get made in different contexts. Each domain adds nodes to the lattice.

Practice coordination explicitly. I used to see meetings as overhead, something to minimize. Now I see them as opportunities to practice the operator skillset: understanding constraints, aligning stakeholders, making decisions with incomplete information. The meeting is the work.

Build artifacts that require judgment. Not just code, but documents that synthesize, analyses that require interpretation, proposals that make bets. These are harder to automate because they involve context and opinion.

Invest in relationships. Not networking in the gross sense, but genuine connection with people whose judgment I respect. Having a brain trust to consult. Being the person others call when they're stuck. I used to think of community as something for leisure; it turns out it matters for work too.

Use AI as a lever, not a crutch. This is tricky. I want AI to handle the parts where it's better than me (retrieval, boilerplate, first drafts) while I focus on the parts where humans still win (context, judgment, relationships). The risk is that I stop practicing the deep skills and lose them. The opportunity is that I can operate at higher scope.


The emotional work

I'd be lying if I said this transition was purely intellectual. There's grief involved.

I liked being the person who knew the obscure trick. I liked the feeling of mastery that came from writing elegant code. I liked being able to debug something nobody else could figure out.

Those feelings were real, and they mattered. Letting go of them---or at least holding them more loosely---feels like losing a part of myself.

The psychological term for this is identity foreclosure: committing to an identity too early, without exploring alternatives. Most of us did this with our careers. We picked a path and made it part of who we are.

The antidote is what James Marcia, building on Erikson, called identity moratorium: a period of exploration without commitment. Permission to try on different ways of being.

I think AI is forcing many of us into an unplanned moratorium. The identity we foreclosed on is no longer stable. We have to explore whether we like it or not.

That's uncomfortable. It's also an opportunity. How often do you get a socially acceptable reason to reinvent yourself?


Meta-skills and deliberate practice

This connects to something I've been thinking about: what do you practice when the skills themselves might be automated?

The answer, I think, is meta-skills:

  • Learning how to learn new domains quickly
  • Developing judgment about what matters
  • Building communication patterns that scale
  • Cultivating taste across multiple fields
  • Understanding how humans coordinate

These are skills that compound. They transfer across domains. And they're hard to automate because they're about operating in the world, not just processing information.

Deliberate practice for meta-skills looks different from deliberate practice for technical skills. You're not doing coding katas; you're deliberately putting yourself in situations where you have to coordinate, communicate, and make judgment calls. Then you reflect on what worked and what didn't.

This is why I keep writing these posts. Writing is a meta-skill. It forces synthesis, clarifies thinking, builds communication muscles. And it leaves artifacts that can be useful to others.


What remains

After all this, what's left?

I think what remains is something like agency: the capacity to perceive a situation, decide what matters, and act effectively. Agency doesn't depend on any particular skill being un-automatable. It's the meta-capacity that uses skills toward ends.

The AI can write the code, but I decide what code to write and whether it's worth writing. The AI can draft the document, but I decide what the document should say and who it should reach. The AI can generate options, but I choose among them.

This is the operator stance: you're not the one doing the work; you're the one ensuring the work gets done. The shift from hands to eyes, from execution to orchestration.

I'm still early in this transition. Some days it feels like liberation---I can do more, scope farther, move faster. Other days it feels like loss---I miss the craft, the flow, the sense of building something with my own hands.

Both feelings are true. The transition is real. And I think we're all going to be navigating it for a while.


A thought experiment

Imagine you wake up tomorrow and AI is really good. Not incrementally better, but qualitatively different: it can do anything you can do and match you at every skill you have.

What would you still want to do? What would you do because you enjoy it, not because you're paid for it?

That's probably your actual identity. The things you'd do even if they weren't competitive advantages. The things that are you, not just your resume.

For me, I think it's: understanding systems, connecting ideas, building things, helping people navigate complexity. Those are the verbs that feel like me, regardless of whether they're economically valuable.

Maybe that's the move: from identity as noun to identity as verb. Not "I am an engineer" but "I engineer." Not "I am a writer" but "I write." The verb is the action; the noun is just a convenient label.

The labels will change. AI will redefine what's possible and what's valuable. But the verbs---the ways you engage with the world---those can stay constant. They're yours.


This post is part of my ongoing exploration of what AI means for knowledge work. See also: Tool Calling Is Just Function Composition, Shrinkage Everywhere, and my 2025 retrospective where I noticed I'm "getting worse at coding the more I use AI."