
FAIR vs. stoplights: why your CFO doesn't trust your risk register

If your risk reports use red-yellow-green and your CFO still can't act on them, the problem isn't your CFO. Here's the case for quantified risk — in dollars, with confidence intervals.

By The Talarity team · April 21, 2026

The first time you present a risk register to a CFO, two things happen.

First, the CFO asks how big the problem is. You point at a chart showing four red items and seven yellow ones. The CFO asks how big in dollars. You say something like “well, it depends on the scenario, but if any of the reds materialize, it could be significant.” The CFO’s face does the thing CFO faces do.

Second, the CFO asks what budget you need. You produce a list of remediation projects with vendor quotes attached. The CFO asks how much risk reduction each project buys. You say something like “well, it would remediate the highest-rated findings in domain X.” The CFO’s face does the thing again.

You leave the room having gotten approval for the cheapest item on the list.

This is the central failure of stoplight-based risk reporting: it can’t be the basis for budget decisions. Stoplights tell a story; budgets need numbers.

The case for FAIR

FAIR (Factor Analysis of Information Risk) is the dominant quantitative risk methodology in cybersecurity. The mechanics aren’t mysterious. You decompose a risk into its primary loss factors:

  • Loss event frequency — how often this thing happens (or could)
  • Probable loss magnitude — when it does happen, what does it cost

Each factor is expressed not as a single number but as a range with a distribution — typically a 90% confidence interval. You don’t say “this risk costs $3M.” You say “this risk has a 90% chance of costing between $1M and $10M, with a most-likely value around $4M.”

You then run a Monte Carlo simulation — typically 10,000 to 100,000 iterations — sampling from your distributions to produce a loss exceedance curve. The curve tells you, for example, that there’s a 10% chance of annual losses exceeding $8M from this risk.
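The mechanics above fit in a short script. A minimal sketch, using stdlib Python only, with hypothetical inputs (a loss event frequency of 0.6/year and a per-event 90% confidence interval of $1M–$10M; none of these figures come from a real analysis):

```python
import math
import random

random.seed(1)

# Assumed example inputs (hypothetical, for illustration only):
# - loss events occur ~0.6 times per year on average (Poisson frequency)
# - per-event loss has a 90% CI of $1M..$10M, modeled as lognormal
FREQ_PER_YEAR = 0.6
CI_LOW, CI_HIGH = 1e6, 10e6

# Fit a lognormal to the 90% interval: the 5th/95th percentiles sit
# 1.645 standard deviations either side of the mean in log space.
mu = (math.log(CI_LOW) + math.log(CI_HIGH)) / 2
sigma = (math.log(CI_HIGH) - math.log(CI_LOW)) / (2 * 1.645)

def sample_poisson(lam: float) -> int:
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

ITERATIONS = 10_000
annual_losses = []
for _ in range(ITERATIONS):
    # Sample how many loss events happen in this simulated year...
    events = sample_poisson(FREQ_PER_YEAR)
    # ...then sum the sampled magnitude of each event.
    annual_losses.append(sum(random.lognormvariate(mu, sigma) for _ in range(events)))

def exceedance_prob(threshold: float) -> float:
    """One point on the loss exceedance curve: P(annual loss > threshold)."""
    return sum(loss > threshold for loss in annual_losses) / ITERATIONS

print(f"P(annual loss > $8M) ~ {exceedance_prob(8e6):.1%}")
```

Sweeping `exceedance_prob` over a range of thresholds traces out the full loss exceedance curve; the single point printed here is the “10% chance of annual losses exceeding $X” style of statement described above.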

This is the language CFOs already speak. Insurance underwriters speak it. Boards understand it. Your auditor will too.

“But our data isn’t good enough”

This is the most common pushback to quantified risk. It’s also wrong in an interesting way.

When you say “our data isn’t good enough,” you’re usually comparing it to some imagined gold-standard dataset. But the alternative isn’t FAIR with bad data versus FAIR with good data. The alternative is FAIR with bad data versus stoplights with no data.

Stoplights are quantitative claims dressed up as qualitative ones. When you label a risk “high,” you’re implicitly making a numerical claim — and one with even less rigor than a calibrated FAIR estimate from a domain expert. At least with FAIR, the analyst has to commit to a distribution and a 90% interval. The dishonesty of stoplights is hidden; the dishonesty of bad FAIR data is right on the page, where you can argue with it.

There’s also good evidence — Doug Hubbard’s How to Measure Anything is the canonical reference — that calibrated experts produce surprisingly accurate ranged estimates even with what feels like very limited data. The problem isn’t the data. It’s the calibration.

What FAIR doesn’t do

FAIR is a methodology for financial loss estimation, not a complete risk management framework. It tells you how much risk you have. It doesn’t tell you:

  • How to identify which threats matter for your organization (use NIST 800-30 or your favorite threat-modeling discipline for that).
  • Which mitigations to prioritize (FAIR informs the answer; the answer requires policy and risk-tolerance decisions).
  • How to track compliance with regulatory requirements (that’s the compliance program; FAIR feeds into it).

FAIR is the lingua franca for translating cyber risk into business terms. It pairs with everything else; it doesn’t replace anything else.

Practical adoption

If you’re moving from stoplights to FAIR, the smallest viable step is:

  1. Pick your top five risks. Don’t try to convert your whole register at once.
  2. For each risk, identify the primary loss scenarios. Data breach. Ransomware. Insider misuse. Whatever the actual loss event would be.
  3. Calibrate your analysts. A few hours of practice — Hubbard’s calibration exercises are widely used — dramatically improves the quality of ranged estimates.
  4. Run the Monte Carlo. Most modern tools do this for you; if not, a Python script with 10,000 iterations gets you 95% of the way there.
  5. Express the result two ways. “90% confidence the annual loss is between $X and $Y, expected $Z.” And: “10% chance of losing more than $W per year.” Both framings will land with different audiences.
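Step 5’s two framings fall out of the same simulated array. A sketch, assuming `annual_losses` holds the simulated annual totals from step 4 (dummy lognormal draws stand in here so the snippet runs on its own):

```python
import random
import statistics

random.seed(7)

# Stand-in for the Monte Carlo output of step 4: 10,000 simulated
# annual-loss totals (hypothetical parameters, for illustration only).
annual_losses = [random.lognormvariate(15.0, 0.7) for _ in range(10_000)]

losses = sorted(annual_losses)
n = len(losses)

# Framing 1: "90% confidence the annual loss is between $X and $Y, expected $Z."
low = losses[int(0.05 * n)]       # 5th percentile
high = losses[int(0.95 * n)]      # 95th percentile
expected = statistics.fmean(losses)

# Framing 2: "10% chance of losing more than $W per year."
w = losses[int(0.90 * n)]         # the loss exceeded in 10% of simulated years

print(f"90% confidence: ${low/1e6:.1f}M to ${high/1e6:.1f}M, expected ${expected/1e6:.1f}M")
print(f"10% chance of losing more than ${w/1e6:.1f}M in a year")
```

The first framing suits budget conversations; the second maps directly onto how insurers and boards already think about tail exposure.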

After your top five, you’ll know whether the methodology is worth scaling. Most teams that try it expand to their full register within two quarters.

What you give up

The honest tradeoff: FAIR takes longer than stoplights. Calibrating analysts is a one-time investment, and scoping scenarios and running the Monte Carlo add a few hours per risk up front. The payoff is durable: you’ll spend less time defending your conclusions because your conclusions can be defended.

You also give up the comfort of vague statements. You can’t hide behind “this is a high risk” when the report says “$4.2M annualized expected loss with 90% confidence between $1.1M and $10.7M.” Stakeholders will engage with that number. They’ll question your assumptions. They’ll ask you to update it when conditions change.

That’s a feature, not a bug. The whole point is that the number drives decisions. If nobody pushes back on it, you’re back to stoplights with extra steps.

The boardroom dividend

Here’s what changes when you move to quantified risk.

Your board meeting time on cybersecurity stops being “tell us how scared to be” and starts being “here are three investment options at different ROI levels.” Your CFO stops cutting your budget by 15% as a default and starts asking which projects survive a haircut. Your insurance broker stops giving you generic limits and starts pricing your tower against your actual loss exposure.

Most consequentially: your security investments stop feeling like a bottomless cost center and start looking like portfolio decisions with measurable returns. That’s the difference between being a function the business tolerates and being a function the business invests in.

If you’re running a stoplight-based register today and want to see what FAIR-quantified risk looks like in practice — Talarity’s Risk module ships with FAIR Monte Carlo built in. We’re happy to walk through your top five risks in a 30-minute session.

