The question every growth team asks badly
It is Tuesday morning. You are in the weekly growth review. The VP of Product looks up from her laptop and asks, "How is our time to value these days?" The analyst on the call taps a dashboard, scrolls for three seconds, and says, "Around five days." Everyone nods. Someone writes it down. The meeting moves on to the next slide.
Both the question and the answer are broken, and no one in the room notices.
The question is broken because "how is our TTV" assumes there is a single number that describes it. The answer is broken because "around five days" collapses thousands of real user journeys into an arithmetic mean that tells you nothing about any of them. The VP is satisfied. The analyst is satisfied. The product is still losing forty percent of its new signups before day fourteen, and the meeting has just voted unanimously to ignore that fact for another week.
This is the single most common failure mode in product-led growth measurement. Time to value is the most important metric most SaaS companies track, and it is also the one they get wrong most consistently. Not because they measure it dishonestly, but because they measure it as if it were a temperature reading instead of a shape. Temperature is one number. Shape is many. Treating a distribution like a scalar throws away ninety percent of the information and keeps the ten percent that flatters you.
This post is about how to stop doing that. You will leave with a working definition of TTV, a list of three specific measurement mistakes to audit yourself against, a precise argument for why the full distribution is the whole story, and a concrete checklist you can run against your own product before lunch.
What TTV actually is
Time to value is the elapsed time between a well-defined starting event and a well-defined value event. That is the entire definition. The hard part is not the math. The hard part is the word "well-defined."
The starting event is usually obvious. It is the moment a real human account begins to exist in your system. Signup confirmation. Account creation. First successful login. Pick one, write it down, stop debating it. Consistency matters more than perfection.
The value event is where most teams go soft. A value event is not "user clicks any button." It is the first moment the user does something that your product was actually built to help them do. For a project management tool, it is the first time a user creates a project and assigns a task to a teammate. For an analytics product, it is the first time they run a report that returns a non-empty result. For a CRM, it is the first time they log a deal that later moves out of the initial stage. The rule is simple: the value event is the thing the user came for. If you cannot describe it in one sentence without using the words "engagement" or "interaction," you have not picked a value event yet.
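If you want to see what "well-defined" buys you, the computation itself is short. Here is a minimal sketch in Python with pandas, assuming a flat event export with hypothetical `user_id`, `event_name`, and `timestamp` columns; the event names and file path are placeholders for your own start and value events:

```python
import pandas as pd

# Hypothetical flat event export (one row per event) from your analytics tool.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# Starting event: first account creation per user. Pick one, write it down.
start = (
    events[events["event_name"] == "account_created"]
    .groupby("user_id")["timestamp"].min()
)

# Value event: the thing the user came for, stated in one sentence.
# "project_with_task_assigned" is a placeholder for yours.
value = (
    events[events["event_name"] == "project_with_task_assigned"]
    .groupby("user_id")["timestamp"].min()
)

# Per-user TTV in days. Users with no value event become NaT and are
# dropped here; track them separately, they are the tail's raw material.
ttv_days = (value - start).dt.total_seconds().div(86400).dropna()
ttv_days = ttv_days[ttv_days >= 0]  # guard against clock or import artifacts
```

Everything else in this post is a view over that one `ttv_days` series.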
Now contrast this with activation rate, which is the metric TTV usually gets confused with. Activation rate is a percentage: it says "X percent of users activated within seven days." It is binary, threshold-dependent, and hides the shape of everything that matters. Two cohorts with identical ninety percent activation rates can have wildly different user realities if the typical user in one cohort reaches value in a day and in the other takes six and a half. Activation rate sounds precise because it is a number with a percent sign, but it is the least informative way to summarize a user journey. There is a longer argument for why in our piece on activation rate as a vanity metric, but the short version is: activation rate throws away the one thing you cannot afford to throw away, which is when.
Time to value keeps the when. That is the entire point of the metric. Every other property, including the mean, the median, the threshold rate, and the segmentation, is derived from the underlying distribution of individual user times. If you do not have that distribution, you do not have TTV. You have a rumor about TTV.
The three ways companies measure TTV wrong
Averaging the distribution
The first mistake is the averaging mistake. Someone pulls the mean time from signup to first value event, gets 5.2 days, puts it in a weekly report, and calls it done. This is the mistake that produced the opening scene of this post. It feels rigorous because it is a number computed from data. It is not rigorous. It is arithmetic laundering.
Here is the concrete failure. Imagine two cohorts of a thousand users each. Cohort A has users tightly clustered between day two and day eight, with a clean peak at day four and almost nothing past day ten. Cohort B has two groups: six hundred users who reach value in a day, and four hundred who either take three weeks or never get there at all. The never-users do not even enter the mean; users without a value event silently fall out of the calculation, which flatters the number further. Both cohorts report the same mean of 5.2 days. Cohort A is a healthy product. Cohort B is a product that is quietly burning forty percent of its pipeline. The mean tells you they are identical. They are not identical.
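The trap is easy to reproduce with synthetic data. A quick sketch (the shapes are invented to echo the two cohorts above, with Cohort B's never-users left out so every synthetic user has a finite time):

```python
import numpy as np

rng = np.random.default_rng(0)

# Cohort A: tight cluster, peak near day 4, nothing much past day 10.
cohort_a = np.clip(rng.normal(loc=5.2, scale=1.5, size=1000), 2, 10)

# Cohort B: 600 fast users near day 1, 400 slow users averaging ~11.5 days,
# chosen so the blended mean also lands near 5.2.
cohort_b = np.concatenate([
    rng.normal(loc=1.0, scale=0.3, size=600),
    rng.normal(loc=11.5, scale=4.0, size=400),
])

print(cohort_a.mean(), cohort_b.mean())       # both approximately 5.2 days
print(np.percentile(cohort_a, [50, 75, 95]))  # tight: roughly 5, 6, 8
print(np.percentile(cohort_b, [50, 75, 95]))  # split: roughly 1, 10, 17
```

The means are nearly identical. Everything past the mean disagrees.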
Arbitrary static thresholds
The second mistake is the threshold mistake. Someone decides activation means "value event within seven days" because seven days sounds reasonable. They then track the percentage of users who hit that threshold, week after week, and call it their north star.
The threshold mistake is worse than it looks. The number seven is not based on anything. It was not computed from the distribution of real users. It is a guess. Because it is a guess, it is insensitive to the two things you actually care about: the users who would have activated on day eight anyway and are now classified as failures, and the users who activated on day two but took four weeks to do anything else and are now classified as successes. A threshold metric is brittle in exactly the places a real metric needs to be sharp.
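The brittleness is easy to demonstrate. Continuing with the hypothetical `ttv_days` series from earlier, slide the guess by one day in each direction and watch the "north star" move:

```python
# A static threshold is a guess, and the rate it produces is brittle.
for threshold in (6, 7, 8):
    rate = (ttv_days <= threshold).mean()
    print(f"'activated' within {threshold}d: {rate:.0%}")

# If these three rates differ meaningfully, your seven-day north star is an
# artifact of where you drew the line, not a property of your users.
```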
Single-event triggers
The third mistake is the triggers mistake. Someone picks a value event that is convenient to instrument rather than meaningful to the user. "First button click in the app" is a trigger. "First invite sent" is a trigger. Both are easy to fire from your event stream. Neither is value. A user who clicks the "create project" button and then immediately closes the tab has not received value. A user who sends a single invite to a dummy email address they own has not received value. You have logged an event. You have not measured anything.
The triggers mistake is especially dangerous because it is self-reinforcing. Once a team ships a TTV dashboard based on a convenient trigger, they start optimizing for the trigger. Growth experiments move the trigger rate without moving retention. The dashboard goes up and to the right while the business quietly declines. This is the most expensive of the three mistakes, because it converts the whole growth org into a team that is working very hard on the wrong number.
The distribution is the whole story
Here is the section that matters.
Every user journey through your product has a time. The time from signup to value is a random variable with a real, empirically observable shape. That shape is the only thing worth looking at. Not the mean of it, not a threshold on it, not a trigger summary of it. The shape itself.
This is what a healthy product looks like. A tight curve with a clear peak, no meaningful tail, and almost no users stuck past day ten. Users either get it quickly or they bounce, and the ones who get it are close together. When you see this shape, you are looking at a product whose onboarding, first-run experience, and value prop are aligned. The mean (5.2 days) is a reasonable summary because the distribution is unimodal and symmetric enough to compress into one number without lying.
This is what a broken product looks like. The mean is identical. The shape is not. Forty percent of users reach value inside twenty-four hours. The other sixty percent are scattered across a long, slow tail that extends to twenty-plus days. This is the shape of a product with two audiences: one that instantly understands the value prop, and one that needs help the product is not giving them. The mean says "5.2 days." The shape says "you are losing the second audience." If the only number you report is the mean, you will never know the second audience exists.
The difference between a healthy product and a broken product is not the mean. It is the shape. The mean is identical in both cases described above. Every product-led growth team that reports a single TTV number is telling this exact kind of story on itself every week, and most of them do not know it.
p50, p75, p95
If you must reduce the distribution to a few numbers, use three of them: the 50th, 75th, and 95th percentiles. Together, these three tell you almost everything the shape was going to tell you.
p50 is the median. It answers the question "what does a typical user experience?" without being skewed by the tail. If p50 is two days in a hypothetical SaaS product, half your users reach value in two days or less. This is the number you can honestly put on a landing page.
p75 catches the slow half. It answers "how bad is it for the users who are not in a hurry?" A p75 of six days means a quarter of your users are still working on it at the end of the first week. This is the number that tells you whether your onboarding holds up for the users who are not already primed to succeed. If p50 and p75 are close together, the product is tight. If they are far apart, the product has a silent drop-off zone between median and third quartile, and you should go find it.
p95 is the churn zone. It is the number nobody wants to look at. p95 is the time it takes for the slowest five percent of your users to reach value, and those users are four times more likely to churn by day thirty than users who reached value before p50. The long right side of the distribution is where revenue quietly leaks out of the product. A p95 of twenty-one days means five percent of everyone who signs up is spending three full weeks trying and failing to find the thing they came for. They do not spend another three weeks. They leave.
If you are only going to watch three numbers, watch these three. The gap between p50 and p95 is the most honest single diagnostic of product-led growth health that exists.
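Against the same hypothetical `ttv_days` series, the three-number summary and the gap diagnostic are one line each:

```python
# Series.quantile returns one value per requested quantile,
# so the three percentiles unpack directly.
p50, p75, p95 = ttv_days.quantile([0.50, 0.75, 0.95])

print(f"p50 {p50:.1f}d   p75 {p75:.1f}d   p95 {p95:.1f}d")
print(f"p95/p50 gap: {p95 / p50:.1f}x")  # over ~5x: an unmanaged tail
```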
The long tail
The long tail is the biggest hidden lever in product-led growth, and almost nobody pulls it.
Here is the pattern. In a typical PLG product, the slowest ten percent of users are roughly three times slower to reach value than the median user, and four times more likely to churn by day thirty than the users who reach value before p50. Those users almost always have something in common. They signed up from the same acquisition channel, or they belong to the same company size segment, or they are on a browser your onboarding does not quite handle, or they landed on a variant of the first-run experience that an A/B test left running for six months after the experiment ended. The tail is not random. The tail is a signal.
The second fact about the tail is that it hides from the mean. A cohort with a healthy median and a brutal tail will produce a mean that looks merely slightly elevated, and most teams will shrug it off. The tail is a population, not a statistical artifact. Those are real users with real credit cards who decided the product was not for them. Every week you do not segment the tail is a week you are leaving them to churn on schedule.
The three measurements any growth team should actually track are (1) the shape of the full TTV distribution, not the mean; (2) p50, p75, and p95 together, not any one of them alone; and (3) the attributes of the users sitting in the p90–p100 tail, segmented by acquisition channel, plan, company size, and first-session behavior. If you are tracking anything else, you are tracking the wrong thing.
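Measurement three is the only one that needs a second table. A sketch, assuming a hypothetical `users.csv` keyed by `user_id` with `channel`, `plan`, and `company_size` columns (first-session behavior works the same way once you have a column for it):

```python
# Hypothetical per-user attribute table.
users = pd.read_csv("users.csv").set_index("user_id")

# The tail: the slowest ten percent of users by TTV.
tail = ttv_days[ttv_days >= ttv_days.quantile(0.90)]
tail_users = users.loc[users.index.intersection(tail.index)]

# For each attribute, compare its share of the tail to its share of
# everyone. A lift well above 1.0 is the attribute that concentrates.
for attr in ("channel", "plan", "company_size"):
    tail_share = tail_users[attr].value_counts(normalize=True)
    base_share = users[attr].value_counts(normalize=True)
    lift = (tail_share / base_share).sort_values(ascending=False)
    print(f"\n{attr}, over-representation in the tail:\n{lift.head(3)}")
```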
How to actually measure TTV without a data team
The reason most teams measure TTV wrong is not that they do not care. It is that measuring it correctly is annoying. You need to pull per-user timestamps from your event store, compute percentiles, re-segment on multiple attributes, re-run the whole thing every week, and do it without a data engineer dropping a ticket into a backlog. Most growth leads do not have a data engineer. The ones who do have one get four hours of their time per quarter. The math is not hard. The operational weight is what kills it.
The right shape of a TTV measurement tool is simple to describe. It pulls the raw per-user timestamps from wherever your events live. It computes the full distribution, not just the mean. It shows you p50, p75, and p95 in one glance. It lets you re-segment on any attribute in one click, so you can answer "what is p95 TTV for users from paid search on the Team plan" without writing SQL. It recomputes every week without being asked. And it shows you the result in a form a human can look at in thirty seconds, not in a form a data scientist has to explain in a meeting.
Tivalio is built on exactly this model. You connect it to Amplitude or Mixpanel, it computes the full TTV distribution from your real user data, and it runs the percentile analysis and the tail segmentation for you on a schedule. Every number you see is computed, not guessed, and the full methodology is visible on the card. It is not a dashboard and it is not an LLM chat toy. It is a deterministic research layer built for the weekly growth review, for the exact moment when someone asks "how is our TTV" and you are one line away from answering it honestly.
The Watch to Understand to Custom Research loop is the operational piece. Watch detects when your TTV distribution shifts shape, not just when the mean moves. Understand explains which segment is moving it. Custom Research lets you follow the thread into any question the first two did not answer. From alert to answer takes about three minutes instead of the three hours it used to take to open a notebook, pull timestamps, and plot a histogram. If you are a growth lead, this is the difference between running a weekly review on live reality and running one on week-old rumor.
The audit checklist
Run this against your own product tomorrow morning. Each item is a concrete action, not an aspiration.
- Pull the full TTV distribution for your last thirty days of signups and look at the shape. If the only TTV number in your weekly review is a mean, stop the meeting and fix that first.
- Compute p50, p75, and p95 separately. Write them on the same line. Look at the gap between p50 and p95. If it is more than 5x, you have a tail problem and you are not currently managing it.
- Measure p95 TTV for your top three acquisition channels separately (a sketch of this check follows the list). The worst channel is almost always at least 2x the best one, and it is almost always the one you spend the most money on.
- Take the slowest ten percent of users by TTV and segment them by plan, company size, and first-session behavior. Find the attribute that concentrates in that bucket. That is your next onboarding fix.
- Rewrite your definition of the value event in one sentence, without the words "engagement" or "interaction." If you cannot, your TTV metric is measuring a trigger, not value, and everything above has to be redone.
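The channel check from item three is the one most likely to surprise you, so here it is spelled out, continuing with the same hypothetical `ttv_days` and `users` frames from earlier:

```python
# p95 TTV per acquisition channel, worst first.
per_channel = (
    ttv_days.rename("ttv_days")
    .to_frame()
    .join(users["channel"])
    .groupby("channel")["ttv_days"]
    .quantile(0.95)
    .sort_values(ascending=False)
)
print(per_channel)
print(f"worst/best: {per_channel.iloc[0] / per_channel.iloc[-1]:.1f}x")
```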
Most SaaS companies measure TTV wrong because the wrong version of the metric is cheaper to produce. The right version is a distribution, three percentiles, and an honest look at the tail. Start there tomorrow, and stop arguing about a single number that was never going to tell you the truth.