
What 'good TTV' looks like by company stage

A reference table of healthy Time To Value ranges for SaaS companies, from seed through Series B.

February 28, 2026 · 6 min read

Why stage matters

A seed-stage team that is optimizing for a p95 TTV of three days is chasing the wrong goal. Not because three days is bad. At seed stage, the question is not "how fast?" It is "does the product deliver value at all?" You have not earned the right to worry about the tail yet, because you do not even know whether the median is stable. Tightening the p95 before you have a healthy p50 is polishing the fender on a car that does not start.

The opposite mistake shows up later. A Series C company still reporting its weekly TTV as a single median is pretending it is a Series A company, and it is almost certainly hiding a cohort problem. At scale, the median becomes the least informative of the three percentiles, because the interesting question has moved to "which cohort is stuck" and the median averages away the answer.

The metric that matters changes with stage. So do the target ranges and the questions you should be asking. Stage changes the question, and if you do not change the question with it, you end up optimizing very precisely for something the business stopped needing a year ago.

The reference ranges

Before the table: these are illustrative ranges based on patterns common to product-led SaaS, not a rigorous survey with a citation. Your product is almost certainly going to land outside one or more of these cells, and that is fine. The purpose of the table is to give you a rough shape to compare your distribution against, not a target to hit. If you land outside and the reason is a cohort problem, go fix the cohort problem. If you land outside and the reason is that your product is structurally different (long-horizon enterprise onboarding, cold-start data ingest, manual configuration), the table does not apply to you and you should ignore it.

| Stage | p50 | p75 | p95 | Distribution shape |
| --- | --- | --- | --- | --- |
| Seed | 1–5 days | 3–10 days | 10–25 days | Wide, often bimodal, unstable week to week |
| Series A | 1–3 days | 3–7 days | 8–18 days | Tightening, one primary peak, recognizable tail |
| Series B | 0.5–2 days | 2–5 days | 6–14 days | Narrow peak, identifiable tail cohorts, low drift |
| Series C+ | minutes–1 day | 1–3 days | 4–10 days | Tight, multi-peak by segment, tail is the work |

The numbers above are wide on purpose. A seed product with a p50 of four days and a p95 of twenty days has a shape problem, but that is the second-order issue. The first-order issue is whether the product works at all, and the p50 tells you more about that than any other number on the row. "Wide, often bimodal" at seed stage is not a failure. It is what a seed-stage distribution usually looks like when one cluster of users gets it immediately and another cluster is still confused. The job at seed is to understand which cluster is the real one and build toward it, not to force the distribution into the shape a Series B product would have.

What each stage should track

Different stages need different headline metrics. If you only watch one number from each row above, the right one is different at every stage.

Seed: does the product work at all?

Track the p50. The median is the only percentile that tells you whether a typical user is getting value. If your p50 is moving chaotically week to week, your product-market fit is not real yet and no other metric is going to save you. If your p50 is stabilizing, even at a number that looks slow, you have a product to work with. Do not track p95 at seed. The tail is too small and the variance is too high; the number will flap around for reasons that have nothing to do with your product.
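A rough sketch of the stability check described above, in Python with numpy. The weekly TTV samples are invented for illustration; the "relative swing" threshold is a hypothetical heuristic, not a number from this post.

```python
import numpy as np

# Hypothetical per-user TTV samples (in days), grouped by signup week.
weekly_ttv = {
    "2026-W05": [0.5, 1.2, 2.0, 3.5, 4.0, 9.0],
    "2026-W06": [0.8, 1.0, 1.5, 2.2, 6.0, 12.0],
    "2026-W07": [0.4, 0.9, 1.8, 2.5, 3.0, 7.5],
}

# The only percentile worth a headline at seed: the weekly p50.
weekly_p50 = {week: float(np.percentile(ttv, 50)) for week, ttv in weekly_ttv.items()}

# A crude stability signal: how far does the weekly median swing,
# relative to its own mean? A large swing means the p50 is not stable yet.
medians = np.array(list(weekly_p50.values()))
swing = float((medians.max() - medians.min()) / medians.mean())
print(weekly_p50, f"relative swing: {swing:.0%}")
```

With these sample weeks the medians sit between roughly 1.9 and 2.8 days, so the number to watch is whether that band keeps narrowing, not its absolute level.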

Series A: can we make it consistent?

Track the ratio of p95 to p50. At Series A, the question is no longer "does it work?" It is "does it work the same way for everyone?" A ratio of four or less is tight. Six or more is loose, and the users in the tail are almost always concentrated in a specific segment you have not addressed. Series A is where the "fix the onboarding for paid search users on mobile" work lives, and the ratio is how you know there is work to do.
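The ratio is two percentile calls and a division. A minimal sketch, with invented TTV samples; the 4x/6x cutoffs are the rule of thumb from the paragraph above.

```python
import numpy as np

# Hypothetical per-user TTV samples (in days) for one month of signups.
ttv_days = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 12.0])

p50 = float(np.percentile(ttv_days, 50))
p95 = float(np.percentile(ttv_days, 95))
ratio = p95 / p50

# Rule of thumb from the text: <= 4 is tight, >= 6 is loose and usually
# means a specific unaddressed segment is sitting in the tail.
verdict = "tight" if ratio <= 4 else ("loose" if ratio >= 6 else "borderline")
print(f"p50={p50:.2f}d  p95={p95:.2f}d  ratio={ratio:.1f} ({verdict})")
```

Note how a single slow user (the 12-day outlier) drags the ratio toward the loose end even though the median looks healthy; that is exactly the signal the ratio exists to surface.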

Series B: which cohorts are stuck?

Track cohort-level percentiles, not global ones. By Series B you have enough volume that the global distribution is a blended average across distinct user populations. Decompose it into component cohorts (by plan, channel, company size, first-session behavior) and track each one's p75 independently. The cohort with the worst p75 relative to the rest is almost always your biggest growth lever for next quarter. You do not find it by watching the blended number.
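One way the decomposition might look, assuming you can pull (cohort, TTV) pairs from your event data. The cohort dimension here is acquisition channel, and all names and numbers are illustrative.

```python
import numpy as np

# Hypothetical (cohort, ttv_days) pairs; the cohort is acquisition channel.
users = [
    ("organic", 0.5), ("organic", 1.0), ("organic", 1.5), ("organic", 2.0),
    ("paid_search", 2.0), ("paid_search", 4.0), ("paid_search", 6.0), ("paid_search", 9.0),
    ("referral", 0.4), ("referral", 0.8), ("referral", 1.2), ("referral", 1.6),
]

# Group the per-user values by cohort.
cohorts: dict[str, list[float]] = {}
for cohort, ttv in users:
    cohorts.setdefault(cohort, []).append(ttv)

# Track each cohort's p75 independently; the worst one relative to the
# rest is the candidate growth lever.
p75_by_cohort = {c: float(np.percentile(v, 75)) for c, v in cohorts.items()}
worst = max(p75_by_cohort, key=p75_by_cohort.get)
print(p75_by_cohort, "worst cohort:", worst)
```

In this toy data the blended p75 would look unremarkable, while the per-cohort view makes paid search the obvious outlier, which is the whole argument for decomposing.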

Series C+: can we optimize the tail?

Track p95 and run attribute impact analysis on the users sitting in it. At scale, the median is a solved problem. The revenue still on the table is in the tail, and the tail is made of specific cohorts with specific, fixable attribute profiles. Rank the attributes that concentrate in your bottom decile by their correlation with slow TTV, and work down the list. The longer argument for the three-percentile framework lives in our piece on how most SaaS companies measure TTV wrong.
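A sketch of the ranking step, using prevalence lift (share of an attribute in the slowest decile versus its share overall) as a simple stand-in for a fuller attribute impact analysis. The attribute names and data are invented.

```python
import numpy as np

# Hypothetical per-user records: TTV in days plus boolean attributes.
# All attribute names are illustrative, not from the original post.
ttv = np.array([0.5, 0.7, 1.0, 1.2, 1.5, 2.0, 3.0, 5.0, 12.0, 20.0])
attrs = {
    "mobile_signup":   np.array([0, 0, 1, 0, 0, 1, 0, 1, 1, 1], dtype=bool),
    "skipped_tour":    np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 1], dtype=bool),
    "enterprise_plan": np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 0], dtype=bool),
}

# Bottom decile by speed = the slowest 10% of users.
cutoff = np.percentile(ttv, 90)
slow = ttv >= cutoff

# Rank attributes by lift: prevalence among slow users vs. overall.
# Lift well above 1.0 means the attribute concentrates in the tail.
lift = {name: float(flags[slow].mean() / flags.mean()) for name, flags in attrs.items()}
for name, value in sorted(lift.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.2f}x")
```

Lift is the crudest possible version of this analysis (it ignores attribute correlations and sample size), but it is enough to produce the ranked work list the paragraph describes.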

How to use this table without lying to yourself

The table is a mirror, not a target. It is tempting to see your p75 above the Series A range and declare the onboarding broken. That is the wrong reaction. The correct reaction is to ask whether the users sitting above the range share a common attribute. If they do, that is not a failure against a benchmark. It is a signal about which cohort needs work. Optimizing against a generic benchmark is a known way to destroy a product that was otherwise working for its actual users. The benchmark does not know about your enterprise users. You do.

Use the table to locate yourself on the map, not to decide whether you are lost. If you are outside the range, that is a prompt to segment — not a prompt to panic. The cohort that is dragging your number is the thing worth finding, and it is almost always a specific, named, fixable group rather than the whole product.

The fastest way to run the segmentation is to pull the full per-user TTV distribution for your last thirty days of signups, compute p50/p75/p95 separately for each of your top three cohort dimensions, and compare against the row for your stage. If one cohort is consistently worse, you have your next roadmap item. If all of them are uniformly outside the range, the issue is structural and the benchmark does not apply. Either way, you learn something. The Tivalio product runs this analysis on a schedule against your existing event data, so you do not have to rebuild the pipeline every quarter.
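The whole workflow above fits in a short script. This sketch assumes a Series A company and uses the upper bounds of that row as the comparison; the user records and cohort dimensions are invented.

```python
import numpy as np

# Upper bounds of the Series A row from the table above (illustrative).
STAGE_ROW = {50: 3.0, 75: 7.0, 95: 18.0}

# Hypothetical per-user records from the last 30 days of signups:
# (ttv_days, plan, channel). All values are invented.
users = [
    (0.5, "pro", "organic"), (1.0, "pro", "organic"), (2.0, "free", "organic"),
    (3.0, "free", "paid"), (6.0, "free", "paid"), (10.0, "free", "paid"),
    (1.5, "pro", "referral"), (2.5, "free", "referral"),
]

results: dict[tuple, dict[int, float]] = {}
out_of_range: dict[tuple, list[str]] = {}

# Compute p50/p75/p95 separately for each cohort dimension, then flag
# any cohort percentile that lands above the stage row.
for dim, key in (("plan", 1), ("channel", 2)):
    groups: dict[str, list[float]] = {}
    for row in users:
        groups.setdefault(row[key], []).append(row[0])
    for name, vals in groups.items():
        stats = {p: float(np.percentile(vals, p)) for p in (50, 75, 95)}
        results[(dim, name)] = stats
        out_of_range[(dim, name)] = [f"p{p}" for p in (50, 75, 95) if stats[p] > STAGE_ROW[p]]

for cohort, flags in out_of_range.items():
    if flags:
        print(cohort, "outside range on", flags, results[cohort])
```

In this toy data only the paid channel lands outside the row, which is the "one cohort is consistently worse" outcome: a roadmap item, not a structural mismatch.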

Stage changes the question. The table is a reminder of which question to ask this year, not an argument that your numbers have to match anyone else's. Use it as a mirror, and go find the cohort that owns your tail.
