Generalist Epistemics: How to Know Things Across Domains
The problem with being a generalist is that you're always an outsider. You parachute into a new field, try to form views quickly, and then must act—invest, build, advise, decide—without the decade of apprenticeship that real experts have. This is uncomfortable. It should be uncomfortable. The alternative, though, is worse: staying in your lane means missing connections, ceding ground to specialists who can't see the forest, and slowly fossilizing into a hedgehog.
This post is about the intellectual toolkit for productive breadth. Not how to become an expert in everything (you can't), but how to navigate domains you don't fully understand while maintaining honest uncertainty about what you don't know.
The Generalist's Dilemma
Most domains have a power-law structure: a small core of foundational concepts explains most of the variance, while a long tail of specialized knowledge matters only for edge cases. The expert spends years in that tail. The generalist needs to identify the core quickly and know when to call in the cavalry.
This creates two failure modes:
Overconfidence. You read a few papers, pattern-match to something you know, and assume you understand more than you do. Classic example: a software engineer who reads about financial derivatives and thinks "it's just functions on functions"—technically true, but missing the institutional, regulatory, and counterparty considerations that dominate practice.
Underconfidence. You defer entirely to domain experts, even when they're captured by their field's assumptions or can't see outside their training distribution. The expert who's never been wrong in their domain often can't recognize when the domain's rules have changed.
The goal is a middle path: confident enough to act, uncertain enough to update, and calibrated enough to know which category each belief falls into.
Heuristics for Outsiders
When I enter a new field—whether it's a technical domain, an industry, or an intellectual tradition—I run through a loose checklist. Not a formal methodology, just habits that have paid off.
1. Find the canonical disagreement
Every mature field has a central debate that cleaves practitioners into camps. Find it. Understanding what they disagree about tells you more than understanding what they agree on.
In macroeconomics: freshwater vs. saltwater, rules vs. discretion. In machine learning: inductive biases vs. scale, Bayesian vs. frequentist. In philosophy of mind: functionalism vs. phenomenal consciousness. In investing: efficient markets vs. behavioral finance.
The disagreement is the field's frontier. It's where the easy questions have been answered and the hard ones remain. Knowing the debate doesn't make you an expert, but it tells you where expert opinion diverges—which means you shouldn't trust any single expert as if they speak for the whole field.
2. Ask "what would falsify this?"
Experts develop strong priors over years of training. Those priors are usually right, but they can calcify into unfalsifiable positions. The outsider's advantage is fresh eyes.
When someone presents a confident view, I ask: "What evidence would change your mind?" If they can't answer, or if the answer is "nothing realistic," that's a flag. Not that they're wrong—maybe the evidence really is overwhelming—but that their belief has become load-bearing for their identity rather than a live hypothesis.
This is Popper 101, but it's surprisingly rare in practice. People don't like being asked what would falsify their expertise.
3. Identify who's been right before
Track records matter. Not perfectly—past performance doesn't guarantee future accuracy—but someone who's made correct, non-consensus predictions in a domain has demonstrated something. At minimum, they're not just reciting the modal view.
This is Tetlock's superforecaster insight. The best predictors aren't the deepest experts. They're people who are:
- Actively open-minded (willing to update)
- Comfortable with probabilistic thinking
- Able to synthesize across sources
- Practiced at distinguishing signal from noise
The domain expert knows the most, but the calibrated generalist often predicts better. This seems paradoxical until you realize that prediction is a different skill from knowledge. The expert knows everything about their corner; the forecaster knows how to weight evidence from many corners.
4. Look for where the field borrowed its ideas
Every discipline has intellectual debts. Economics borrowed optimization from physics. Machine learning borrowed backpropagation from control theory. Finance borrowed stochastic calculus from physics (and then convinced itself it invented it).
Understanding the genealogy helps you:
- Recognize when a "novel" idea is old wine in new bottles
- Know where to look for adjacent solutions
- Spot hidden assumptions imported from the parent field
The Black-Scholes model assumes log-normally distributed prices (normally distributed log returns, via geometric Brownian motion) because Brownian motion was the available mathematical tool. That assumption isn't derived from market microstructure—it's imported from physics. Once you see this, you start asking: "What other assumptions were imported rather than earned?"
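To make the point concrete, here's a minimal sketch (my own illustration, with invented parameters) of where the imported assumption lives in the standard call-price formula: the log-normality sits entirely inside d1 and d2, not in anything derived from how markets actually trade.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, t):
    """Black-Scholes call price. The imported assumption is in d1/d2:
    log(S_T) is taken to be Normal(log(spot) + (rate - 0.5*vol**2)*t, vol**2 * t).
    Swap in a fat-tailed return distribution and the closed form disappears."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

print(black_scholes_call(spot=100, strike=100, rate=0.02, vol=0.2, t=1.0))  # ~8.9
```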
Finding Load-Bearing Assumptions
Every field rests on assumptions that practitioners take for granted. Some are explicit axioms; most are implicit, absorbed through training rather than stated. The generalist's job is to identify which assumptions are load-bearing—remove them and the whole structure wobbles.
The outside view on inside views
Daniel Kahneman's distinction between inside and outside views is useful here. Experts operate in the inside view: they reason from the specifics of the case, using their detailed model of the domain. The outside view asks: "How often do projects like this succeed?" or "What's the base rate for predictions in this domain?"
The inside view is more informative but less calibrated: experts are systematically overconfident about their specific predictions, and the outside view corrects for that overconfidence.
I try to hold both simultaneously. When a domain expert tells me "this technology will scale because of X, Y, Z," I take the argument seriously, and I also ask "how often have similar-sounding arguments been right historically?" If I don't have the base rate, that's a flag: I'm operating on inside-view reasoning alone, which is usually overconfident.
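One way to operationalize holding both views is to treat the base rate as a prior and the inside-view argument as evidence with an assumed likelihood ratio. A toy sketch, every number hypothetical:

```python
def blend_views(base_rate, likelihood_ratio):
    """Outside view as prior, inside view as evidence (odds form of Bayes).
    likelihood_ratio = P(argument this convincing | it works) /
                       P(argument this convincing | it doesn't)."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical: 1 in 5 comparable technologies scaled, and the expert's argument
# seems twice as likely to be made when the technology really will scale.
print(blend_views(base_rate=0.20, likelihood_ratio=2.0))  # ~0.33, far from "near certain"
```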
What do the critics say?
Every dominant paradigm has critics. Some are cranks; some are prophets. The generalist needs to distinguish them.
Useful critics:
- Engage with the strongest version of the paradigm
- Identify specific failures or blind spots
- Offer alternative explanations that fit the data
- Have been right about things before
Useless critics:
- Attack strawmen
- Substitute rhetoric for argument
- Have no alternative model, just grievances
- Are motivated by group identity rather than truth-seeking
The critics are often wrong—dominant paradigms are dominant for reasons—but they're the fastest way to find the load-bearing assumptions. A good critic says "the whole edifice depends on assumption A, and here's evidence that A is shaky." Even if you ultimately disagree, you now know where the weight rests.
Recognizing Isomorphisms
Here's the superpower that makes generalism worthwhile: pattern recognition across domains. The same structure appears in different clothing across fields. If you can see through the notation to the underlying shape, you can transfer insights instantly.
Some examples I've found valuable:
Shrinkage is everywhere. The James-Stein estimator, ridge regression, hierarchical Bayes, credibility theory in actuarial science—all are the same idea: extreme estimates are usually wrong, so pull toward a sensible anchor. Once you see this, you stop reinventing it in each domain. See Shrinkage Everywhere.
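A minimal sketch of the shared move, using the standard normal-normal shrinkage weight (an empirical-Bayes simplification rather than the exact James-Stein formula), with invented estimates:

```python
from statistics import mean, pvariance

def shrink(estimates, noise_var):
    """Pull each noisy estimate toward the grand mean. The weight
    signal_var / (signal_var + noise_var) is the same form that shows up in
    ridge regression, hierarchical Bayes, and actuarial credibility."""
    grand = mean(estimates)
    signal_var = max(pvariance(estimates) - noise_var, 0.0)
    total = signal_var + noise_var
    w = signal_var / total if total > 0 else 0.0
    return [grand + w * (x - grand) for x in estimates]

# Five noisy conversion-rate estimates with known measurement noise (made up)
print(shrink([0.40, 0.26, 0.31, 0.22, 0.35], noise_var=0.004))
```

With these made-up numbers most of the spread is measurement noise, so the estimates collapse hard toward the group mean; with smaller noise they barely move.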
Discounting is everywhere. Bond math, DCF models, delay discounting in psychology, temporal difference learning in reinforcement learning—all involve weighing the future against the present. The math is identical; only the application differs.
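A sketch of the shared arithmetic, using exponential discounting (psychology's hyperbolic variant changes the weight function, not the structure); the cash flows are invented:

```python
def present_value(cashflows, rate):
    """Discount a stream of (time, amount) pairs at a constant per-period rate.
    The same sum prices a bond, scores a DCF project, or computes an RL return
    with discount factor gamma = 1 / (1 + rate)."""
    return sum(amount / (1.0 + rate) ** t for t, amount in cashflows)

bond = [(1, 5.0), (2, 5.0), (3, 105.0)]        # annual coupons, principal at maturity
project = [(1, -50.0), (2, 30.0), (3, 45.0)]   # an outlay followed by payoffs
print(present_value(bond, rate=0.04), present_value(project, rate=0.10))
```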
Convexity is everywhere. Options pricing, antifragility, venture returns, evolution by natural selection—systems with convex payoffs benefit from volatility. Once you see the shape, you start asking "is this a linear or convex situation?"
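The claim that volatility helps convex payoffs is Jensen's inequality, and a few lines of simulation make it visible (payoff and parameters invented for illustration):

```python
import random

def expected_payoff(payoff, mean, vol, n=100_000, seed=0):
    """Monte Carlo estimate of E[payoff(X)] with X ~ Normal(mean, vol)."""
    rng = random.Random(seed)
    return sum(payoff(rng.gauss(mean, vol)) for _ in range(n)) / n

def convex(x):
    """Option-like payoff: flat downside, linear upside."""
    return max(x - 100.0, 0.0)

print(expected_payoff(convex, mean=100.0, vol=5.0))    # roughly 2
print(expected_payoff(convex, mean=100.0, vol=20.0))   # roughly 8: same mean input, higher value
```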
Bayesian updating is everywhere. Kalman filters, belief propagation, EM algorithms, Tetlock's forecasting—the same structure of prior × likelihood → posterior, computed in different ways for different problems.
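In the simplest Gaussian case the prior × likelihood → posterior step collapses to two lines, and those two lines are exactly the scalar Kalman filter update (numbers illustrative):

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation.
    The gain prior_var / (prior_var + obs_var) is the Kalman gain;
    the result is the conjugate normal-normal posterior."""
    gain = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + gain * (obs - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

print(gaussian_update(prior_mean=0.0, prior_var=4.0, obs=2.5, obs_var=1.0))  # (2.0, 0.8)
```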
The generalist's edge is seeing these isomorphisms. The expert sees "ridge regression"; the generalist sees "oh, this is the same bias-variance tradeoff I saw in actuarial credibility theory last month." That recognition is worth years of domain-specific training because it imports a whole literature of results.
"This is just X in disguise"
I've started keeping a running list of transformations that unmask hidden isomorphisms:
- Take the log and it becomes additive. Multiplicative processes (compound returns, population growth, cascade failures) linearize under log-transformation. Many "complex" phenomena are simple in log-space (see the sketch after this list).
- Condition on the right variable and it becomes independent. Correlated observations often become independent conditional on a latent factor. Factor models, hierarchical Bayes, and graphical models all exploit this.
- Flip the direction and it becomes the same algorithm. Forward-backward equivalences are everywhere: filtering vs. smoothing, generative vs. discriminative models, encoding vs. decoding.
- Add a time dimension and it becomes control theory. Static optimization becomes dynamic programming when state evolves. LQR, Kalman filtering, and MPC are just optimization with a time index.
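Here is the log-transform item from the list above in code, assuming a made-up return series: compounding in the original space and summing in log space give the same number.

```python
import math

returns = [0.05, -0.02, 0.03, 0.07, -0.01]  # hypothetical period returns

compounded = 1.0
for r in returns:                 # multiplicative in the original space
    compounded *= (1.0 + r)

log_sum = sum(math.log(1.0 + r) for r in returns)  # plain addition in log space
print(compounded, math.exp(log_sum))               # identical up to floating point
```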
When you encounter something new, asking "what's this isomorphic to?" is often the fastest route to understanding.
Calibrated Uncertainty
Knowing what you don't know is harder than knowing things. My post on calibration covers the mechanics: reliability diagrams, proper scoring rules, Brier decomposition. Here I want to focus on the meta-cognitive aspect.
Three levels of uncertainty
Known unknowns. I know the base rate exists but I don't know it. I can estimate my uncertainty: "I'm 60% confident, but that estimate might be miscalibrated by 10 percentage points." This is the domain of calibration training—practice until your 70% predictions come true 70% of the time.
Unknown unknowns. I don't even know what I don't know. This is where humility comes from. No amount of calibration training helps with the question you didn't think to ask.
Known knowns that are wrong. The most dangerous category. Things I'm confident about that aren't true. These are the hardest to detect, because confidence preempts investigation.
The generalist should hold uncertainty at all three levels:
- Explicit probability estimates for known unknowns
- A general expectation that unknown unknowns exist (intellectual humility)
- Periodic audits of confidently-held beliefs (epistemic hygiene)
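A minimal version of that last bullet, assuming you keep a log of (stated probability, outcome) pairs: bucket the log and compare stated confidence with observed frequency. The log below is invented.

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (stated_probability, binary_outcome) pairs into 0.1-wide buckets
    and report (count, observed frequency) for each bucket."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p, 1)].append(outcome)
    return {b: (len(v), sum(v) / len(v)) for b, v in sorted(buckets.items())}

log = [(0.6, 0), (0.6, 1), (0.7, 1), (0.7, 1), (0.7, 0), (0.9, 1), (0.9, 1), (0.9, 0)]
print(calibration_table(log))  # a 0.9 bucket landing near 0.67 is the kind of flag to audit
```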
Foxes vs. hedgehogs
Philip Tetlock's research on expert political judgment found that foxes (those who know many things and integrate diverse perspectives) consistently outpredict hedgehogs (those who know one big thing and filter everything through it).
The hedgehog has a hammer and sees only nails. The fox has a toolkit and matches tools to problems.
This isn't a personality type—it's a cognitive style that can be practiced. The fox:
- Aggregates across sources rather than trusting any single authority
- Updates incrementally rather than defending a fixed position
- Expresses beliefs probabilistically rather than categorically
- Looks for disconfirming evidence rather than just confirmation
Tetlock's Good Judgment Project showed that calibration, active open-mindedness, and practice predict forecasting accuracy better than domain expertise. The best forecasters were often intelligent generalists, not subject-matter experts.
Building a Trust Network
You can't know everything. You need people you can call—experts who are both deep in their domain and calibrated about its boundaries. This is the trust network.
What to look for
Calibrated experts. They can say "I don't know" and "that's outside my expertise." They give probability estimates, not just confident assertions. They've been right before in ways you could verify.
Intellectual honesty. They'll tell you when you're wrong, not just what you want to hear. They distinguish their views from the field's consensus and flag where they're heterodox.
Adjacent knowledge. They know their field's blind spots because they've looked outside it. The best domain expert to ask about X is often someone who also knows Y and can see X's limitations.
Skin in the game. They've made decisions based on their beliefs, not just published papers. Taleb's point: people who bear the consequences of being wrong develop better calibration than those who don't.
How to cultivate
- Ask specific, falsifiable questions. Not "what do you think about AI?" but "what's your probability that X achieves Y by Z date?" Specificity reveals calibration.
- Track their predictions. Did they actually know what they claimed to know? Build your own track record of their track record (a minimal scoring sketch follows this list).
- Triangulate. Never rely on a single expert, no matter how good. Get multiple perspectives and look for convergence and divergence.
- Pay attention to their uncertainties. A good expert's "I'm not sure" is often more valuable than a mediocre expert's confidence.
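The scoring sketch referenced above: once you've logged an adviser's probabilistic answers against resolved outcomes, a Brier score is the simplest comparison. Both track records below are invented.

```python
def brier_score(forecasts):
    """Mean squared error of probability forecasts against 0/1 outcomes.
    Lower is better; always saying 0.5 scores exactly 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

adviser_a = [(0.8, 1), (0.3, 0), (0.6, 1), (0.9, 1), (0.2, 0)]      # measured, mostly right
adviser_b = [(0.95, 1), (0.05, 0), (0.9, 0), (0.99, 1), (0.01, 1)]  # confident, twice badly wrong
print(brier_score(adviser_a), brier_score(adviser_b))  # ~0.07 vs ~0.36
```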
The T-Shape Revisited
The standard advice is to be "T-shaped": deep in one area, broad across many. This is good but incomplete.
Deep enough to have judgment. You need at least one domain where you've done the full apprenticeship—developed intuition, made mistakes, built tacit knowledge. This gives you a visceral sense of what expertise feels like, which calibrates your assessment of others' expertise.
Broad enough to connect. The value of breadth is not knowing a little about everything. It's seeing the structural similarities across domains, importing tools from one field to another, and asking questions that specialists wouldn't think to ask.
Dynamic over time. The T-shape isn't static. You might go deep in domain A for five years, then port that depth to domain B. The depth accumulates and compounds; the breadth expands your toolkit.
My own path: started in mathematics (deep), moved through software engineering (applied depth), into finance (new domain, pattern-matched heavily), now increasingly in decision theory and AI (synthesizing). Each domain imported tools from the previous ones. The math intuition transferred to software; the software tooling transferred to data analysis; the finance frameworks transferred to thinking about AI systems.
The Epistemics of Epistemics
This post has been about how to know things. But there's a recursive question: how confident should I be in these epistemics themselves?
Honestly, moderately. These heuristics have worked for me, but they're not derived from first principles. They're empirical observations about what's helped me navigate unfamiliar territory. Your mileage may vary.
The one thing I'm confident about: calibrated uncertainty beats false confidence. Whether you're a generalist parachuting into new domains or a specialist extending beyond your training distribution, knowing what you don't know is the meta-skill that makes all other skills useful.
The fox doesn't know everything. The fox knows that she doesn't know everything, and acts accordingly.
Connections
- Calibration: Are Your Probabilities Honest? — The mechanics of calibration: reliability diagrams, Brier scores, and recalibration methods.
- The James-Stein Paradox — Why shrinking unrelated estimates toward a common mean reduces total error. The statistical foundation for "extreme estimates are usually wrong."
- Shrinkage Everywhere — The same "don't trust outliers" intuition across James-Stein, ridge regression, hierarchical Bayes, and empirical Bayes.
- We Buy Distributions, Not Deals — Decision-making under uncertainty: pricing corridors, leverage frontiers, and sell/hold as posterior probability.
Further Reading
- Tetlock (2005). Expert Political Judgment. The landmark study showing foxes outpredict hedgehogs. Essential for understanding why calibrated generalists often beat overconfident specialists.
- Tetlock & Gardner (2015). Superforecasting. The practical follow-up: how to actually become a better forecaster.
- Kahneman (2011). Thinking, Fast and Slow. Inside vs. outside views, base rates, and the psychology of expert judgment.
- Silver (2012). The Signal and the Noise. Case studies in forecasting across domains—elections, baseball, weather, earthquakes.
- Taleb (2007). The Black Swan. The limits of prediction, fat tails, and why unknown unknowns dominate.
- Pearl & Mackenzie (2018). The Book of Why. Causal reasoning as the foundation of genuine understanding, not just correlation-spotting.
The generalist's path is uncomfortable by design. You're always the outsider, always playing catch-up, always aware of how much you don't know. But the payoff is real: you see connections that specialists miss, you import tools across boundaries, and you maintain the calibrated uncertainty that makes learning possible.
The goal isn't to become an expert in everything. It's to become expert at navigating non-expertise—to know when you know, when you don't, and who to ask when it matters.