
Bimodal TTV Distributions and What They Reveal About Your Product


Most teams only discover bimodal Time-to-Value (TTV) after they’ve already started “fixing onboarding.” Someone notices the median got worse, or a funnel step conversion slipped, and the team reacts: reduce steps, rewrite copy, add nudges. A month later the dashboard looks “healthier” in a few places, yet the lived experience hasn’t changed. Sales still complains that trials stall. CS still sees the same pattern: some accounts get value fast and expand; others never really get there, or they take weeks.

The mistake is subtle: teams treat a split reality as measurement noise.

They assume the distribution is one population with some randomness, so the job is to pull the whole curve left. But when TTV is bimodal, you don’t have “some users fast, some users slow.” You have two different mechanisms generating value. And that usually implies a structural product issue: multiple paths, multiple meanings of “value,” or multiple user types being forced through one flow.

Bimodality is not a chart artifact. It’s the product telling you it’s serving two realities.


What teams usually measure vs what actually matters

What teams usually measure:

  • A single activation rate tied to a proxy (“created first dashboard,” “invited teammate,” “connected data”).
  • A single central tendency for speed (“average days to activate,” sometimes median).
  • A funnel that assumes a canonical, linear path.

What actually matters for TTV:

  • The distribution of time to the first defensible value moment.
  • The shape of that distribution (spread, tails, discontinuities, multi-modality).
  • The conditional story: “Given a user/account with characteristics X, what is P(T ≤ t | X), and how does it differ across segments?”
  • Whether there are distinct populations that should be treated differently (product, onboarding, packaging, or qualification).

Bimodality is the clearest case where “median TTV” is not just insufficient; it’s misleading. If half your users reach value in hours and half in weeks, the median may sit somewhere in the middle and describe almost nobody.


What bimodal TTV actually looks like (and why CDFs reveal it)

A bimodal distribution means the density has two peaks: two clusters of users reaching value around two different time ranges. In practice, teams often see it first in a cumulative distribution function (CDF), because the CDF shows accumulation over time.

Instead of a smooth S-curve, the CDF often has a fast rise, then a plateau (few users reaching value), then another rise later. That plateau is not “variance.” It’s a gap between two mechanisms.

[Figure: Bimodal TTV CDF with plateau]

Two things are happening in that shape:

  1. A chunk of users reach value quickly (the first steep rise).
  2. Another chunk reaches value much later (the second rise), with a lull in between.

If you only look at the median (here, around 4–5 days), you miss the plateau entirely. If you only look at the average, you blur the two peaks into a single meaningless number.
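A minimal sketch, using synthetic numbers, of how the median of a bimodal sample can land in the empty gap between the peaks and describe almost nobody:

```python
import statistics

# Synthetic TTV sample (days): a fast cluster under ~1 day
# and a slow cluster around ~20 days. All values are illustrative.
fast = [0.2, 0.3, 0.5, 0.5, 0.8, 1.0]
slow = [18, 19, 20, 21, 22, 25]
ttv = fast + slow

median = statistics.median(ttv)          # falls in the gap between peaks
near_median = [t for t in ttv if abs(t - median) <= 2]

print(f"median = {median:.1f} days")
print(f"users within ±2 days of median: {len(near_median)} of {len(ttv)}")
```

Here the median is 9.5 days, and not a single user actually reached value anywhere near that number.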

Bimodality is the distribution telling you: “There are two different products being experienced.”


Why bimodal TTV happens in B2B SaaS (the non-hand-wavy reasons)

Bimodality doesn’t come from “some users being more motivated.” That explanation is comforting because it implies there’s nothing structurally wrong. In B2B SaaS, bimodality typically comes from one (or more) of these conditions:

1) Two legitimate value paths, but the product pretends there is one

Example pattern: “fast value” users succeed via a lightweight path (manual input, template, sample data, a narrow use case). “Slow value” users go through a heavy path (integration, data modeling, approvals). Both are valid, but the product/onboarding treats the heavy path as the default.

This creates two peaks:

  • Peak A: users who can get value without dependencies.
  • Peak B: users who need external systems/permissions and therefore stall, then complete later.

The key is not that the heavy path is slow. It’s that users are not sorted into paths intentionally; they self-select through confusion and circumstance.

2) User heterogeneity: different jobs-to-be-done produce different TTV physics

Some products have fundamentally different time constants across segments. A single-seat practitioner use case can yield value in hours. A cross-functional workflow requires alignment, governance, and data quality, and value realistically lands weeks later.

If you call both “activation” and measure them together, bimodality is inevitable. You have mixed two distributions with different underlying processes.

3) Hidden gating steps create a “stuck region”

A bimodal curve often indicates a gating dependency:

  • procurement/security review
  • data connection approval
  • internal champion recruiting other roles
  • admin configuration
  • waiting for a scheduled event (weekly meeting, month-end close)

Users either clear the gate early (fast mode) or they don’t clear it until much later (slow mode). The plateau corresponds to “waiting time,” not “learning time.”

4) False activation creates an artificial early peak

Some teams define value too early because it’s instrumentable. That can create an early spike: users complete the proxy event quickly, yet real value requires later steps. When you measure TTV to that proxy, it looks like “fast value” for many users. When you measure TTV to a defensible value moment, you see the second peak (or you see that the first peak was never value at all).

A diagnostic hint: if the early peak does not correlate with retention, expansion, or repeated usage, it’s likely not value. Bimodality sometimes appears when you fix the value definition and realize you were combining “clicked around” with “benefited.”


Why the mistake persists even in mature teams

Mature teams are not ignorant of distributions. The issue is operational, not educational.

  1. Dashboards demand single numbers. Leadership wants a “TTV” KPI. The path of least resistance is an average or median. Once it’s in a deck, it becomes real.

  2. Funnels encode a worldview. A funnel implies a canonical path. Bimodality is fundamentally “multiple paths.” It doesn’t fit the mental model, so it gets rationalized away as noise.

  3. Experimentation frameworks bias toward local optimization. A/B tests are great at moving a step conversion. They are bad at telling you that half your users are on the wrong journey.

  4. Segmentation feels like storytelling. Teams worry (reasonably) that slicing creates spurious narratives. So they default to aggregates. But bimodality is exactly the scenario where segmentation is not optional; it’s the only way to see what’s true.


Reframing with distribution-based thinking

If T is time-to-value, a distribution-first approach asks questions like:

  • What are P(T ≤ 1 day), P(T ≤ 7 days), and P(T ≤ 30 days)?
  • What are the percentiles p50, p75, and p90, and what is the gap p90 − p50?
  • Is the density unimodal or multi-modal?
  • Do we observe a plateau in the CDF (low hazard region), implying a gate?

Bimodality is often visible as two distinct spikes in the derivative of the CDF (the density). Another useful lens is the hazard rate: the probability a user reaches value at time t given they haven’t yet. In discrete form:

h(t) = P(T = t | T ≥ t)

A bimodal process tends to show an early hazard spike, then a low-hazard valley (plateau), then another spike. That “valley” is where product work should focus—not because it’s slow, but because it’s structurally stuck.
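A discrete hazard can be computed directly from observed days-to-value. The sketch below uses a synthetic sample chosen to show the spike–valley–spike pattern; the numbers are illustrative, not from real data:

```python
from collections import Counter

def discrete_hazard(ttv_days, horizon):
    """Discrete hazard h(t) = P(T = t | T >= t) for t = 0..horizon.

    ttv_days: day index on which each user reached value (ints).
    Users reaching value after the horizon stay in the risk set.
    """
    counts = Counter(ttv_days)
    at_risk = len(ttv_days)
    hazard = {}
    for t in range(horizon + 1):
        hazard[t] = counts[t] / at_risk if at_risk else 0.0
        at_risk -= counts[t]
    return hazard

# Illustrative bimodal sample: early spike, valley (days 3-8), late spike.
sample = [0, 0, 1, 1, 1, 2] + [9, 10, 10, 11]
h = discrete_hazard(sample, horizon=11)
print({t: round(v, 2) for t, v in h.items() if v > 0})
```

The days where h(t) drops to near zero are the valley: the structurally stuck region the text describes.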


WATCH: Surface the current reality of bimodality

In the Watch phase, the goal is not to explain. It is to stop lying to yourself with aggregates.

Look at the full distribution and name the shape

Don’t start with “median TTV is 5 days.” Start with:

  • “55% reach value within 6 days, then almost nobody reaches value between day 6–15, then another 35% reach value between day 15–24.”

That statement is actionable because it asserts the existence of two regimes.
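A statement like that can be produced directly from raw TTV data. The window boundaries and sample below are illustrative; in practice you would read the boundaries off the observed CDF:

```python
def regime_shares(ttv_days, windows):
    """Share of users reaching value within each (lo, hi] day window.
    Window boundaries are a modeling choice, not derived automatically."""
    n = len(ttv_days)
    return {(lo, hi): sum(lo < t <= hi for t in ttv_days) / n
            for lo, hi in windows}

# Synthetic sample of 20 users: 11 fast, none in the middle, 7 late, 2 beyond.
ttv = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6] + [16, 17, 18, 19, 20, 22, 24] + [30, 40]
shares = regime_shares(ttv, [(0, 6), (6, 15), (15, 24)])
for (lo, hi), s in shares.items():
    print(f"days {lo}-{hi}: {s:.0%}")
```

This yields exactly the two-regime sentence above: 55% within 6 days, nobody between days 6 and 15, 35% between days 15 and 24.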

Track percentiles as a set, not a single KPI

Bimodality often hides behind a stable median. You’ll see it if you track multiple percentiles:

  • p25 might improve (fast group getting faster)
  • p75 might worsen (slow group slipping)
  • p90 might be unchanged (tail dominated by organizational gates)

This is why “Are we getting faster?” is the wrong question. The right question is: “Which part of the distribution moved?”
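A sketch of tracking percentiles as a set, using a simple nearest-rank convention (one of several valid definitions) and synthetic cohorts where the fast group improves while the tail does not:

```python
def percentile(values, q):
    """Nearest-rank percentile (q in 0..100). This is one convention of
    several; production code should pick a definition and stick to it."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100 * (len(s) - 1))))
    return s[k]

# Illustrative cohorts (days to value): the fast group got faster,
# the slow group slipped slightly, the tail did not move.
before = [1, 2, 2, 3, 4, 6, 15, 18, 20, 22]
after  = [0.5, 1, 1, 2, 3, 5, 15, 19, 20, 22]

for name, cohort in [("before", before), ("after", after)]:
    p = {q: percentile(cohort, q) for q in (25, 50, 75, 90)}
    print(name, p)
```

The median improved, p75 got worse, and p90 didn’t move: the single-KPI view would call this a win while the slow regime quietly degraded.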

Watch cohorts for shape changes, not just shifts

When a team changes onboarding, they often focus on moving the curve left. But bimodality is frequently about mix and routing. A release can increase the proportion of users in the fast peak without changing the slow peak at all—or vice versa.

In Watch, explicitly compare cohort CDFs and ask:

  • Did the plateau move?
  • Did the plateau get longer (worse) or shorter (better)?
  • Did the early rise get steeper (more fast value) but the late rise stay the same (slow value unchanged)?

UNDERSTAND: Explain why there are two peaks

Once you accept that you have two populations or two mechanisms, the work becomes rigorous: identify what separates them.

Start with conditional distributions, not averages

Instead of “Segment A has lower mean TTV,” use the CDF language:

  • Compare P(T ≤ 3 | X) and P(T ≤ 20 | X) across segments.

This avoids getting trapped in a single-number debate and forces you to see whether a segment is primarily in the early peak, the late peak, or both.
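A sketch of that conditional-CDF comparison. The segment names and times are hypothetical; the point is that each segment gets two probabilities, not one mean:

```python
def cdf_at(values, t):
    """Empirical P(T <= t) for a list of observed times-to-value (days)."""
    return sum(v <= t for v in values) / len(values)

# Hypothetical segments split by an observable proxy
# (e.g. "connected a data source within 48h" vs not).
segments = {
    "connected_early": [0.5, 1, 1, 2, 2, 3],
    "connected_late":  [12, 15, 18, 20, 21, 24],
}

for name, times in segments.items():
    print(name,
          "P(T<=3) =", round(cdf_at(times, 3), 2),
          "P(T<=20) =", round(cdf_at(times, 20), 2))
```

If one segment is entirely in the early peak and the other entirely in the late one, the proxy is defining the regimes, and the debate about averages becomes moot.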

The questions that actually isolate the mechanism

You’re trying to determine whether bimodality is driven by:

  • Different paths (behavioral divergence)
  • Different constraints (org gates)
  • Different value definitions (false activation)
  • Different user types (heterogeneity)

A few high-signal cuts, when done on raw event data with user-level timestamps:

  1. Integration dependency: CDF split by “connected data source within 48h” vs not. If the fast peak is entirely in the “connected early” group, you’ve learned something structural: the dependency is defining the regimes.

  2. Role / permissions: Admins vs non-admins. If non-admins populate the plateau, you’re not dealing with product comprehension; you’re dealing with authority.

  3. Acquisition channel / qualification: Self-serve vs sales-assisted, or “ICP-like” vs not. If one cohort is almost entirely late-peaking, your go-to-market is feeding users a path the product can’t support quickly.

  4. Path clustering: Identify the first 5–10 meaningful events and cluster sequences. Bimodality often corresponds to two canonical sequences that never converge until late (or never).

Distinguish friction vs heterogeneity vs false activation

Bimodality forces you to pick the right diagnosis:

  • Friction: same underlying job, but some users get slowed by avoidable steps. Fixing friction should shrink the plateau and pull the late peak left without changing who is in it.
  • Heterogeneity: different jobs with different time constants. The correct response is separate journeys and separate “value moments,” not one optimized flow.
  • False activation: the early peak isn’t value. The correct response is redefining value, and then reorienting onboarding toward it, even if it makes metrics look worse in the short term.

If you skip this distinction, you’ll ship optimizations that make charts prettier while the underlying split remains.


IMPROVE: Turn bimodality into product decisions (not dashboard commentary)

The Improve phase is not “make the slow users faster” in the abstract. It’s: decide which regime you want to change, and what trade-off you accept.

Decision 1: Should there be one product journey or two?

If bimodality reflects genuine heterogeneity, the right move is to explicitly support two journeys. That can mean:

  • different setup flows
  • different default templates
  • different success criteria
  • different in-product guidance
  • even different packaging or qualification

The strategic implication: you stop optimizing one path to please everyone and instead create predictability for each group. Predictability often matters more than raw speed, because it allows customers to plan and reduces perceived risk.

Decision 2: If there is a gate, do you remove it or route around it?

If the plateau is caused by a dependency (permissions, integrations, approvals), you typically have three levers:

  1. Remove the dependency: create a standalone mode, partial value without integration, mock/sandbox data, or progressive connection.
  2. Accelerate the dependency: better admin workflows, clearer requirements, validation tooling, security documentation embedded in-product.
  3. Route users earlier into the dependency: detect the need immediately and push the task upfront so “waiting time” starts sooner.

The mistake teams make is to polish the middle of the flow while the user is stuck on a gate that sits outside the product’s control.

Decision 3: Do you want speed or certainty?

Bimodal distributions often represent a trade-off between:

  • Speed for the fast group (minimal friction, self-serve)
  • Certainty for the slow group (guided setup, enforced prerequisites)

If you only optimize for speed, you can increase variance: the best users get faster, but everyone else gets more confused. The distribution becomes more bimodal. Many “simplification” projects do exactly this.

A more mature objective is: reduce the mass in the valley and tighten spread, even if it slows the very fastest path slightly. For many B2B products, moving p90 matters more than shaving hours off p25.

Decision 4: Stop using a single value event if value has multiple meanings

If your product legitimately creates value in two ways, you have to model that. Otherwise, your TTV metric will keep oscillating between being too strict (missing real value) and too loose (counting false positives).

You can still keep one TTV metric, but it must be defined as the earliest time the account reaches any defensible value outcome among a set:

T = min(T1, T2, …, Tk)

Where each Ti corresponds to a specific value outcome with real behavioral proof behind it. If you do this, bimodality can either disappear (because you were previously measuring the wrong thing) or become clearer (because you’re now measuring real value and seeing real divergence). Both outcomes are progress.
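One way to operationalize T = min(T1, …, Tk) is to take, per account, the earliest timestamp across a declared set of value events. The event names below are hypothetical placeholders for whatever defensible outcomes your product actually has:

```python
# Hypothetical set of defensible value events (illustrative names).
VALUE_EVENTS = {"report_shared", "alert_acted_on"}

def earliest_value_day(events):
    """events: list of (event_name, day) tuples for one account.
    Returns the earliest day any value event occurred, or None if
    the account has not reached value by any definition yet."""
    days = [day for name, day in events if name in VALUE_EVENTS]
    return min(days) if days else None

account = [
    ("signed_up", 0),
    ("alert_acted_on", 2),
    ("report_shared", 6),
]
print(earliest_value_day(account))  # 2: the earliest defensible value moment
```

Accounts that never emit any value event stay censored (None) rather than being silently dropped, which keeps the distribution honest.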


A concrete way to reason about “two peaks” without overfitting

When you see bimodality, treat it as a hypothesis: there exists a latent variable Z (user type, path type, constraint type) such that:

P(T ≤ t) = Σz P(T ≤ t | Z = z) · P(Z = z)

Your job is not to invent Z from intuition. It’s to find observable proxies that explain most of the separation. You’re looking for cuts where the conditional distributions become unimodal or at least materially simpler.
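The mixture identity can be checked numerically. The sketch below assumes two exponential components with illustrative rates; real TTV data won’t be exponential, but the decomposition logic is the same:

```python
import math

# Numerical instance of P(T <= t) = sum_z P(T <= t | Z=z) * P(Z=z)
# using two exponential components with very different time constants.
def exp_cdf(t, rate):
    return 1 - math.exp(-rate * t)

p_fast, p_slow = 0.6, 0.4            # P(Z = fast), P(Z = slow); illustrative
rate_fast, rate_slow = 1.0, 0.05     # per-day rates; illustrative

def mixture_cdf(t):
    return p_fast * exp_cdf(t, rate_fast) + p_slow * exp_cdf(t, rate_slow)

for t in (1, 7, 30):
    print(f"P(T <= {t:>2}) = {mixture_cdf(t):.2f}")
```

The overall CDF rises fast, flattens once the fast component is exhausted, then climbs again as the slow component catches up: exactly the plateau shape discussed above.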

If after reasonable segmentation the bimodality persists, that’s also a signal: your “value event” might be bundling multiple meanings, or your event model may be missing key state transitions that define when value actually happens.


Closing: what bimodality is trying to tell you

A bimodal TTV distribution is rarely a statistics curiosity. In B2B SaaS, it’s almost always a product truth: you have two journeys, two user types, or two gating regimes—and your current onboarding and measurement approach is flattening them into one storyline.

If you respond with generic optimization (“reduce steps,” “add tooltips,” “improve the funnel”), you’ll make local improvements while preserving the structural split. The fast users will stay fast. The slow users will stay slow. Your median may wobble. Your understanding will not deepen.

The disciplined response is distribution-first: Watch the shape, Understand the mechanisms behind each cluster, then Improve by making explicit decisions about routing, prerequisites, and what “value” operationally means. This is the kind of analysis Tivalio is designed to support: not reporting a single TTV number, but diagnosing why the curve has the shape it does—and what that implies about the product you’re actually shipping.

