Posterior Multiples for Pricing, Leverage, and Covenants
Letting shrinkage tame the comps before the covenant debate.
11/3/2025
Most investment committees still open with a comp table: a handful of deals, a simple average, and a narrative about where “the market” sits. Those tables are better than nothing, but they struggle whenever data are sparse, biased, or drawn from the wrong regime. This post offers a friendlier alternative: treat entry and exit multiples as full distributions, let thin sectors borrow strength from thick ones, and read pricing/leverage/covenant decisions off the posterior.
In private markets every meaningful decision—how much to pay, how much to lever, which covenants to accept—depends on three quantities:
- The entry multiple you negotiate today (typically enterprise value divided by the latest twelve months of EBITDA or revenue). Paying 11× on $50M of EBITDA implies a $550M purchase price, which sets your equity check once leverage is chosen.
- The exit multiple you expect to realize later. Exit EV/EBITDA, plus EBITDA growth and deleveraging, drives DPI/TVPI and covenant compliance.
- The spread between entry and exit. A single turn of multiple expansion on $60M of EBITDA adds $60M of value—often the difference between hitting a target IRR and missing it.
Treating those numbers as fixed points forces you to pretend certainty where there isn’t any. Treating them as posterior distributions—with partial pooling across sectors and explicit macro regimes—lets the investment committee talk about pricing corridors, leverage frontiers, and covenant odds using the same underlying model. That is the job of this post.
Thesis: treat multiples as distributions
Classic comp tables pretend each observed multiple is a fact we can average. In reality the data are scarce, biased, and regime-dependent. A hierarchical Bayesian model reframes each comp as a draw from a sector and macro-state distribution:
- Sector-level anchors absorb thin deal histories without collapsing every industry into the same mean.
- Deal-level covariates (size, growth, margins, leverage, process type) explain systematic differences.
- A regime mixture handles credit cycles so exits priced in 2011 do not pollute exits priced in 2022.
Once we operate on distributions, we can talk about pricing corridors (“IRR stays above 18% until you pay 10×”), leverage frontiers (“ICR breach probability crosses 5% above 4× net debt”), and covenant odds with actual posterior probabilities—not gut-feel haircuts.
Posterior exit multiples by sector
Consumer
Median exit EV/EBITDA
9.88×
80% band [6.87, 12.84]×
Energy
Median exit EV/EBITDA
9.91×
80% band [6.63, 12.67]×
Financial Services
Median exit EV/EBITDA
9.41×
80% band [6.66, 12.39]×
Healthcare
Median exit EV/EBITDA
9.66×
80% band [7.21, 12.38]×
Industrial
Median exit EV/EBITDA
10.07×
80% band [7.27, 12.95]×
Technology
Median exit EV/EBITDA
9.96×
80% band [7.32, 12.85]×
Each card highlights the posterior median exit EV/EBITDA and 80% band for a sector. The ridge plot underneath shows the same information as smooth densities—the Technology ridge is narrow and centered near 10× because the dataset has depth, while Consumer and Energy shrink toward the global anchor because their historical samples are thinner. This is what “borrowing strength” looks like in practice.
Why point comps fail
Point-estimate comps are brittle for structural reasons:
- Thin samples. Some sector–vintage buckets barely contain a handful of observations, so the variance is enormous.
- Selection bias. The deals we hear about are not random draws; weak companies go dark, strong companies transact.
- Survivorship bias. Public comp sets quietly drop delisted names and M&A exits, inflating “market” averages.
- Denominator noise. EBITDA is measured with error, and dividing by a noisy denominator pushes reported multiples upward.
- Cycle effects. Multiples expand and compress with credit spreads, liquidity, and animal spirits—but point tables rarely disclose that state.
Figure: what goes wrong with thin samples. Sampling noise (left), survivorship uplift (middle), and noisy denominators (right) are why we prefer a distributional view over a single comp average.
From left to right: a six-deal comp set swings wildly around the “true” 10× level, survivorship bias inflates averages once weak deals disappear, and noisy EBITDA denominators push reported multiples upward. These pathologies are the motivation for treating multiples as a hierarchy rather than a single number. The next sections walk through the data we need, the hierarchical + regime-aware model we fit, and the decision tools (pricing corridor, leverage frontier, covenant odds) that fall out of the posterior.
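These three pathologies are easy to reproduce. A minimal numpy sketch—using made-up lognormal parameters around a hypothetical "true" 10× level, not the post's dataset—shows sampling noise, survivorship uplift, and denominator bias in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_LOG_MU, LOG_SD = np.log(10.0), 0.30   # hypothetical lognormal comps around a true 10x

# 1) Sampling noise: six-deal comp averages swing widely around 10x.
six_deal_means = np.exp(rng.normal(TRUE_LOG_MU, LOG_SD, size=(2000, 6))).mean(axis=1)

# 2) Survivorship: drop the weakest two deals before averaging and the mean drifts up.
draws = np.exp(rng.normal(TRUE_LOG_MU, LOG_SD, size=(2000, 6)))
survivor_means = np.sort(draws, axis=1)[:, 2:].mean(axis=1)

# 3) Noisy denominator: dividing EV by noisy EBITDA inflates the average reported multiple.
ev, ebitda_true = 500.0, 50.0              # a true 10x deal
noisy_ebitda = ebitda_true + rng.normal(0.0, 10.0, size=20_000)
reported = ev / noisy_ebitda[noisy_ebitda > 10.0]   # guard against near-zero denominators
```

The spread of `six_deal_means`, the gap between `survivor_means` and the full-sample mean, and the upward drift of `reported` above 10× are exactly the left, middle, and right panels of the figure.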
Data schema: what we feed the model
The data layer produces one row per deal/comp with everything the model needs:
- Target features. Log revenue, growth, EBITDA margin, capital intensity, leverage, diligence quality, and process type explain systematic differences across companies.
- Context features. Sector, geography, fund vintage, and the macro snapshot (credit spreads, base rates, public EV/EBITDA) tied to the deal date drive pooling and the regime gate.
- Outcome. Realized or marked exit multiples, modeled on the log scale for stability.
- Entry/exit linkage. Entry multiples, net leverage, debt cost, and covenant floors feed the pricing corridor, leverage frontier, and covenant stress sections downstream.
Keeping entry, exit, and macro state together means the PyMC model can learn how multiples evolve through regimes while the decision layer can translate those draws into IRR/DPI/ICR metrics—no more sprinkling “market standard” comps in isolation.
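As a sketch of what one such row could look like—field names and values here are illustrative, not the pipeline's actual schema:

```python
from dataclasses import dataclass

@dataclass
class DealRow:
    """One row per deal/comp. Field names are hypothetical stand-ins for the schema above."""
    # Target features
    log_revenue: float
    revenue_growth: float
    ebitda_margin: float
    capex_intensity: float
    net_leverage: float
    process_type: str          # e.g. "auction" vs "proprietary"
    # Context features
    sector: str
    geography: str
    vintage: int
    credit_spread_bps: float
    base_rate: float
    public_ev_ebitda: float
    # Outcome + entry/exit linkage
    entry_ev_ebitda: float
    exit_ev_ebitda: float      # realized or marked; modeled as log(exit_ev_ebitda)
    debt_cost: float
    icr_floor: float

row = DealRow(log_revenue=5.3, revenue_growth=0.12, ebitda_margin=0.22,
              capex_intensity=0.05, net_leverage=3.8, process_type="auction",
              sector="Technology", geography="NA", vintage=2019,
              credit_spread_bps=310.0, base_rate=0.02, public_ev_ebitda=12.1,
              entry_ev_ebitda=9.5, exit_ev_ebitda=10.8, debt_cost=0.07, icr_floor=1.5)
```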
Model: hierarchical shrinkage with a regime mixture
We model log multiples as

log mᵢ = α_{s(i)} + xᵢᵀβ + γ_{z(i)} + εᵢ, with εᵢ ~ Normal(0, σ²), where:

- α_{s(i)} captures sector anchors with partial pooling.
- xᵢᵀβ encodes deal-level controls that travel across sectors.
- γ_{z(i)} adds a regime effect determined by latent state z(i).
4.1 Partial pooling across sectors
Thin sectors shrink toward the global anchor μ_α, while data-rich sectors keep distinct levels: α_s ~ Normal(μ_α, σ_α). The Half-Cauchy prior on σ_α makes the shrinkage adaptive: wide sectors learn their own mean, narrow sectors lean on the pool.
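The shrinkage mechanics can be illustrated with a Normal–Normal conjugate update, holding the scales fixed (the real model learns σ_α and σ jointly; the numbers below are made up):

```python
import numpy as np

def pooled_sector_mean(sector_logs, mu_global, sigma_alpha, sigma_obs):
    """Posterior mean of a sector anchor alpha_s under Normal-Normal conjugacy.

    Thin sectors (small n) lean on mu_global; thick sectors keep their own mean.
    sigma_alpha (between-sector) and sigma_obs (within-sector) are assumed known
    here purely for illustration."""
    n = len(sector_logs)
    prec_prior = 1.0 / sigma_alpha**2
    prec_data = n / sigma_obs**2
    w = prec_data / (prec_data + prec_prior)   # shrinkage weight on the sector's own data
    return w * np.mean(sector_logs) + (1 - w) * mu_global

mu_global = np.log(10.0)
thin = np.log([7.0, 14.0])            # a 2-comp sector: noisy, shrinks hard
thick = np.log(np.full(40, 12.0))     # a 40-comp sector near 12x: keeps its level

thin_anchor = np.exp(pooled_sector_mean(thin, mu_global, 0.15, 0.35))
thick_anchor = np.exp(pooled_sector_mean(thick, mu_global, 0.15, 0.35))
```

The thin sector's anchor lands near the 10× pool despite its scattered comps, while the thick sector stays close to its own 12× mean—the "borrowing strength" behavior the ridge plots show.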
4.2 Mixture for regime shifts
We introduce a latent regime zᵢ that selects the offset γ_{zᵢ}:
Start with two regimes, zᵢ ∈ {0, 1} (“easy credit” vs “tight credit”). Later, let Pr(zᵢ = k) depend on macro covariates via a softmax gate to morph the mixture into a mixture-of-experts.
Regime offsets (multiplicative on EV/EBITDA)
Component 0 (easy credit)
Median × 1.00 (80% [0.81, 1.24])
Component 1 (tight credit)
Median × 1.00 (80% [0.80, 1.24])
The gate is deliberately soft: median weight on the easy-credit component is roughly 0.52 when spreads are 150 bps and drifts down to ~0.41 once spreads push past 390 bps. That is enough to encode “credit is tight” without making the mixture jump discontinuously. The histograms show each component shifting exit multiples by ~±25% (roughly 0.6–1.6× multipliers), which is the magnitude we observe in the historical data—meaningful for pricing, but still tame enough that the hierarchy stays stable.
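With only two regimes, the softmax gate collapses to a logistic curve in the macro covariate. A sketch with coefficients back-fit to the two medians quoted above (illustrative, not estimated from any data):

```python
import numpy as np

def gate_weight_easy(spread_bps, a=0.3575, b=-0.00185):
    """Weight on the 'easy credit' regime as a function of credit spreads.

    A two-component softmax gate reduces to a logistic function. The
    coefficients a, b are hypothetical, chosen so the curve passes near the
    ~0.52 @ 150 bps and ~0.41 @ 390 bps medians mentioned in the text."""
    return 1.0 / (1.0 + np.exp(-(a + b * spread_bps)))

w_easy_150 = gate_weight_easy(150.0)
w_easy_390 = gate_weight_easy(390.0)
```

Because the curve is smooth in spreads, the regime weight drifts rather than jumps—the "deliberately soft" behavior described above.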
4.3 PyMC implementation
The PyMC workflow includes data prep, model definition, sampling, and JSON export. Full code and sampler settings are tucked into [1] so the main text can stay focused on the modeling story. Implementation details—likelihood choice, priors, mixture stability, pooling depth, etc.—plus the formal specification live in [2].
Turning posteriors into decisions
Posterior draws of the exit multiple plug directly into pricing, leverage, and covenant conversations. Think of the stack this way: (1) entry multiple + EBITDA today tells you the purchase price, (2) posterior exit multiples + forecast EBITDA trace out a distribution of terminal values, and (3) those cash flows roll through your IRR/DPI calculator. The workflow below assumes you already have that cash-flow model so the Bayesian layer can hand it realistic draws instead of single points.
5.1 Pricing corridors
Let the proposed entry multiple be m_entry and the posterior for the exit multiple be p(m_exit | data).
Push posterior draws through your IRR/DPI engine and pick m_entry inside a corridor where the IRR quantiles clear your hurdle and Pr(DPI < 1) stays below the loss cap.
Concretely, suppose the posterior exit multiple for a software roll-up is centered on 11× and you are debating whether to pay 9× or 10× of entry EBITDA. At 9× the posterior IRR median clears 20% with only a 7% probability of capital loss. At 10× the equity check grows, leverage headroom shrinks, and the posterior puts 14% probability on DPI < 1. The math is the same model, but the interpretation is now “choose the multiple where the probability of ruin stays under policy.”
Figure: Pricing corridor fan. Plot IRR quantiles against the entry multiple m_entry, with a shaded inadmissible region where Pr(DPI < 1) exceeds the loss cap. It turns comps into a buy discipline instead of an anecdote.
Read the corridor plot like a stoplight: the teal bands are IRR quantiles as you raise entry EV/EBITDA, and the dotted line is the probability of losing money (DPI < 1). In this dataset the posterior says you can pay up to ~9.5× before the median IRR dips below target, and ruin probability stays under 10% until you print double digits. That is miles more informative than “the average comp is 9.2×.”
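A Monte Carlo corridor scan is straightforward once you have exit-multiple draws. The sketch below substitutes synthetic lognormal draws and toy deal mechanics (hold period, growth, debt paydown are all editor assumptions) for the real posterior and cash-flow engine:

```python
import numpy as np

rng = np.random.default_rng(7)
T, E0, L = 5, 60.0, 4.0          # hold (years), entry EBITDA ($M), net debt / EBITDA
paydown = 0.35                   # fraction of debt repaid by exit (assumption)
m_exit = np.exp(rng.normal(np.log(10.0), 0.22, size=20_000))  # stand-in posterior draws
growth = rng.normal(0.08, 0.04, size=20_000)                  # annual EBITDA growth draws

def corridor(entry_grid, ruin_cap=0.10):
    """Scan entry multiples; return (entry, median IRR, P(MOIC < 1)) rows plus
    the largest entry multiple whose ruin probability stays under the cap."""
    rows, max_ok = [], None
    for m_in in entry_grid:
        equity0 = (m_in - L) * E0                          # EV minus debt at close
        exit_ebitda = E0 * (1 + growth) ** T
        equity_T = np.maximum(m_exit * exit_ebitda - (1 - paydown) * L * E0, 0.0)
        moic = equity_T / equity0
        irr_med = float(np.median(moic ** (1 / T) - 1))
        p_ruin = float((moic < 1.0).mean())
        rows.append((m_in, irr_med, p_ruin))
        if p_ruin <= ruin_cap:
            max_ok = m_in
    return rows, max_ok

rows, max_entry = corridor(np.arange(8.0, 12.5, 0.5))
```

Swapping the stand-in draws for the hierarchy's actual posterior (and the toy MOIC math for your real cash-flow model) turns `max_entry` into the walk-away multiple the corridor plot visualizes.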
5.2 Leverage safety margins
Leverage decisions follow the same playbook: instead of “3.5× feels prudent,” translate net debt into posterior odds of hitting IRR targets and breaching coverage covenants. Let L denote net debt-to-EBITDA at close. For each posterior draw—complete with revenue growth, margin evolution, and debt cost—we project EBITDA paths and interest coverage, ICR_t = EBITDA_t / interest_t.
[3] Figure: Leverage frontier. Plot expected IRR and probability of ICR breach as functions of net leverage L, and highlight the maximum leverage that satisfies the breach cap.
The leverage frontier makes the same idea visual for capital structure. The shaded IRR band peaks near 3.5–4.0× net debt; beyond that, incremental leverage doesn’t add expected return but the dotted line (probability of ICR breach) accelerates. This lets you argue for—or against—an aggressive debt package using posterior odds instead of “market standard” anecdotes.
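A sketch of that frontier scan, again with stand-in draws and deliberately crude cash-flow assumptions (no amortization schedule; interest enters only through the coverage test and a haircut at exit):

```python
import numpy as np

rng = np.random.default_rng(11)
E0, m_in, T = 60.0, 9.5, 5            # entry EBITDA ($M), entry multiple, hold (years)
debt_cost, icr_floor = 0.08, 2.0      # cash interest rate, covenant floor (assumptions)

shocks = rng.normal(0.06, 0.10, size=(20_000, T))          # yearly EBITDA growth draws
ebitda_paths = E0 * np.cumprod(1 + shocks, axis=1)
m_exit = np.exp(rng.normal(np.log(10.0), 0.22, size=20_000))  # stand-in posterior draws

def frontier(leverage_grid, breach_cap=0.05):
    """Scan net-debt/EBITDA levels; return (L, median IRR, P(ICR breach)) rows
    plus the max leverage whose breach probability stays under the cap."""
    rows, max_ok = [], None
    for L in leverage_grid:
        debt = L * E0
        interest = debt_cost * debt
        icr_min = ebitda_paths.min(axis=1) / interest       # worst-year coverage
        p_breach = float((icr_min < icr_floor).mean())
        equity0 = (m_in - L) * E0
        # crude exit equity: exit EV minus debt minus cumulative cash interest
        equity_T = np.maximum(m_exit * ebitda_paths[:, -1] - debt - interest * T, 0.0)
        irr_med = float(np.median((equity_T / equity0) ** (1 / T) - 1))
        rows.append((L, irr_med, p_breach))
        if p_breach <= breach_cap:
            max_ok = L
    return rows, max_ok

rows, max_leverage = frontier(np.arange(2.0, 6.5, 0.5))
```

`max_leverage` is the quantity the frontier figure highlights: the last leverage level whose posterior breach odds still sit inside credit policy.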
5.3 Covenant stress via exit-multiple tails
Pricing and leverage decisions live in the middle of the posterior; covenant conversations live in the tail. Once you set a downside trigger (e.g., “the ICR covenant fails below 1.5×” or “a springing covenant trips if exit EV/EBITDA falls below 7.5×”), you can read the probability straight off the posterior: Pr(m_exit < 7.5× | data).
In practice we lift the exact same posterior draws used in §5.1–5.2 and compute the cumulative distribution against covenant thresholds. That produces a simple “tail odds” table you can bring to lenders: at the 7.5× trigger the posterior reports the breach probability directly, so you size the covenant cushion until that probability sits inside the credit policy (say 5%). Move the trigger to 7.0× and the breach probability falls to ~3%. You can even track how those odds evolve as macro regimes rotate, which is much more actionable than a shrug (“this feels safe”) and traces back to the same hierarchy that set pricing and leverage.
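The tail-odds computation itself is one line per threshold—an empirical CDF over the posterior draws. With stand-in lognormal draws in place of the true posterior:

```python
import numpy as np

rng = np.random.default_rng(3)
m_exit = np.exp(rng.normal(np.log(10.0), 0.22, size=50_000))  # stand-in posterior draws

def tail_odds(draws, thresholds):
    """P(exit multiple < K) for each covenant threshold K, read off the posterior CDF."""
    return {k: float((draws < k).mean()) for k in thresholds}

odds = tail_odds(m_exit, [7.0, 7.5, 8.0])
```

The resulting dictionary is the “tail odds” table: one breach probability per candidate covenant trigger, computed from the same draws that priced the deal.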
Diagnostics that earn trust
Charts already embedded above double as diagnostics:
- Sector ridge plots show shrinkage working: if a sector were over-pooled you would see a flat ridge glued to the global mean; instead, Technology, Industrial, and Financial Services retain distinct medians that mirror the underlying data.
- Macro fan chart demonstrates the regime layer is contextual: exit-multiple bands widen in 2008–2009 and tighten in mid-cycle vintages, matching lived experience.
- Mixture panel confirms the gate behaves: the “easy credit” histogram concentrates around modest positive offsets while the “tight credit” histogram sits symmetrically below zero; no label-switching headaches.
Two additional visuals make the trust argument explicit:
- Prior vs posterior overlay. The gray prior is deliberately weak (it allows absurd 150× EV/EBITDA values), while the blue posterior sits exactly where the data do (roughly 8–12×). Seeing the blue band spike inside 10× while the gray band is flat from 0–30× proves the posterior owes everything to the data, not to a smuggled prior.
- Posterior predictive checks. The small multiples below compare observed EV/EBITDA histograms (orange) to simulations (blue) for each sector. Overlap means the model can reproduce the data it claims to explain; gaps highlight exactly where to iterate next (Energy still has heavier tails, so that is where we add variance structure).
Figure: Posterior predictive vs observed (sector × vintage).
Outside the charts we still log R-hat, effective sample sizes, and backtests (log score/Brier) in the notebook, but these visuals are what we show stakeholders to prove the hierarchy is calibrated.
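The backtest metrics are simple to compute. A sketch with hypothetical quarterly predictions (the probabilities and outcomes below are invented for illustration):

```python
import numpy as np

def brier(p_pred, outcomes):
    """Mean squared error of predicted downside probabilities vs 0/1 outcomes (lower is better)."""
    p_pred, outcomes = np.asarray(p_pred, float), np.asarray(outcomes, float)
    return float(np.mean((p_pred - outcomes) ** 2))

def log_score(p_pred, outcomes):
    """Average log probability assigned to the realized outcome (higher is better)."""
    p_pred, outcomes = np.asarray(p_pred, float), np.asarray(outcomes, float)
    p = np.clip(np.where(outcomes == 1.0, p_pred, 1.0 - p_pred), 1e-9, 1.0)
    return float(np.mean(np.log(p)))

# e.g. quarterly: predicted P(DPI < 1) per deal vs realized losses
p_hat = [0.05, 0.20, 0.10, 0.40]
hit = [0, 0, 0, 1]
b, ls = brier(p_hat, hit), log_score(p_hat, hit)
```

Tracking these two numbers each quarter is the “report calibration” habit recommended below.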
Culture change in IC decks
- Replace flat comp averages with posterior medians, 80% bands, and regime odds (“tight-credit regime probability is 35%”).
- Put pricing corridors and leverage frontiers on one page so decisions trace back to posterior odds.
- Report calibration quarterly—average log score, Brier score for downside events, realized vs predicted multiples.
- Keep a one-page appendix on priors, pooling structure, and sensitivity outcomes to demystify the Bayesian layer.
- Make it a mantra: we buy distributions, not points.
Ideas for extending the modeling stack (dynamic intercepts, public–private bridges, selection adjustments) live in [4].
Closing
Point comps are fragile artifacts from a data-poor era. A hierarchical posterior delivers sector- and vintage-aware multiples with uncertainty you can act on. When pricing corridors, leverage frontiers, and covenant stress tests all reference the same posterior, the investment committee sees decisions grounded in distributions—not darts.
- Note [1]
The PyMC driver script that loads data, fits the hierarchy, and emits JSON. The core of build_model reflects the hierarchy, covariates, and regime gate described in §4.
- Note [2]
Implementation notes referenced in §4:
- Likelihood. Work on log multiples for stability; convert back via exp(·) for medians or exp(μ + σ²/2) for means.
- Priors. Standardize features, use Normal priors on β, and Half-Cauchy priors on the scale parameters (σ_α, σ, and any regime-offset scales).
- Robustness levers. Swap the Normal likelihood for Student-t tails when outliers dominate, add errors-in-variables for noisy EBITDA, and drop in vintage random effects if the mixture feels too coarse.
- Mixture stability. Start with two regimes; if chains mix poorly, drop to a vintage random effect, then reintroduce the gate.
- Pooling depth. Add super-sector anchors (sector → super-sector → global) when sectors are ultra-sparse.
- Software. PyMC or Stan both work; mixture-of-experts gates sometimes benefit from Pólya–Gamma augmentations.
- Speed vs rigor. Variational inference is fine for exploration, but rerun full NUTS before quoting numbers.
- Note [4]
Extensions that slot neatly into the pipeline:
- Dynamic intercepts. Let the sector anchors α_{s,t} follow a random walk across vintages to capture secular drifts.
- Public–private bridge. Jointly model public comps with a latent discount for private transactions.
- Selection correction. Add a first-stage probit (“deal observed”) and propagate the correction à la Heckman.
- Heteroskedasticity. Allow the residual scale σ to grow with leverage or cyclicality.
- Note [3]
Interest Coverage Ratio (ICR): EBITDA divided by cash interest expense. An ICR of 2.5× means the company generates 2.5 dollars of EBITDA for every dollar of interest owed, and dropping below covenant floors (e.g., 1.5×) typically triggers lender remedies.