Most teams trying to “fix onboarding” are actually trying to fix a narrative.
The narrative goes like this: users drop off in our funnel at step X, so step X must be broken. We should remove friction there, add tooltips, or reorder the checklist. The team ships a few improvements, the funnel conversion ticks up, and everyone feels progress—until churn stays flat and expansion doesn’t move. Time-to-Value (TTV) barely improves, or improves for a narrow slice while the long tail gets worse.
This mistake persists even in mature teams because funnels are emotionally satisfying. They offer a single, legible culprit. They feel deterministic. And they map cleanly to “work”: redesign this page, simplify that form, add nudges. Path-to-value problems rarely cooperate. They are not one step failing; they are multiple paths diverging, with different prerequisites, different degrees of optionality, and different failure modes.
If you want to identify broken paths to value using event data, you have to stop treating activation like a staircase and start treating value as a destination reached via competing routes. That means sequence thinking: what users do in what order, with what delays, and which sequences reliably end in value.
What teams usually measure vs what actually matters
What teams usually measure:
A funnel from signup → create project → invite teammate → connect integration → “activated”. Maybe they add time-between-steps averages. Maybe they split by persona.
What actually matters:
The probability that a user reaches real value within a meaningful time bound, conditional on the path they took.
In other words, not “did they complete step 3,” but:
- Which event sequences are associated with reaching value quickly?
- Which sequences are associated with never reaching value?
- Where do successful and unsuccessful users diverge in order and time, not just in counts?
Funnels collapse sequences into “progress,” but many products have parallel and optional steps. A user can skip “invite teammate” and still get value. Another can “connect integration” and still never reach value because the integration was misconfigured, or because their use case required an extra configuration step your product doesn’t guide.
In path terms, funnel steps are just events. The breakage is usually in the transitions: users take a reasonable action and then follow with something that looks plausible but isn’t sufficient—or they fail to discover the necessary next step.
Reframing as distribution-based thinking (and why sequences are the missing layer)
TTV is a distribution. That’s already true even if you only look at “time from signup to value event.” But path analysis is what tells you why the distribution has the shape it has.
Let T be the random variable “time-to-value” (in hours or days) for a user who eventually reaches value, and let V be the event “user reaches value within 30 days.” The quantities you actually care about are things like:
- P(V | path) — how likely value is, given a path.
- F(t) = P(T ≤ t | V, path) — the CDF of TTV for those who reach value, conditioned on a path.
- Differences between paths: P(V | path A) − P(V | path B), and differences in percentiles: p50(T | path A) vs p50(T | path B), p90(T | path A) vs p90(T | path B).
When teams rely on a single “activation time” average, they’re implicitly assuming the process is unimodal and homogeneous. In reality, the long tail often comes from specific sequences that create dead ends, ambiguity, or hidden prerequisites.
Path analysis is essential because it turns “our p90 got worse” into “users who take path B have a 2× longer p90 and a 30% lower chance of ever reaching value; divergence happens immediately after event X.”
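As a minimal sketch of those quantities, the following computes P(V | path) and TTV percentiles per path from per-user records. The data, the path labels, and the nearest-rank percentile definition are all illustrative assumptions, not a prescribed pipeline:

```python
# Minimal sketch: path-conditioned value rates and TTV percentiles.
# Each row is (path, reached_value, ttv_days) per user. All data here
# is hypothetical.
import math
from collections import defaultdict

users = [
    ("A", True, 0.5), ("A", True, 1.0), ("A", True, 2.0), ("A", False, None),
    ("B", True, 3.0), ("B", True, 9.0), ("B", False, None), ("B", False, None),
]

def percentile(sorted_vals, q):
    """Nearest-rank percentile (q in (0, 1]) of a sorted, non-empty list."""
    idx = max(0, math.ceil(q * len(sorted_vals)) - 1)
    return sorted_vals[idx]

by_path = defaultdict(list)
for path, reached, ttv in users:
    by_path[path].append((reached, ttv))

summary = {}
for path, rows in by_path.items():
    reachers = sorted(ttv for reached, ttv in rows if reached)
    summary[path] = {
        "p_value": len(reachers) / len(rows),  # P(V | path)
        "p50": percentile(reachers, 0.5),      # median TTV among reachers
        "p90": percentile(reachers, 0.9),      # tail TTV among reachers
    }

print(summary)
```

With this toy data, path B shows both a lower P(V | path) and a heavier TTV tail than path A, which is exactly the comparison the prose above describes.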
The Watch → Understand → Improve approach to broken paths
The trap is to jump straight to “Improve” with UI tweaks. Path diagnosis is a sequence problem first, an interface problem second.
WATCH: Surface the reality of value paths, not just rates
Start with two distributions, not one:
- The distribution of TTV among those who reach value (CDF + percentiles).
- The distribution of non-value outcomes (users who never reach value within a horizon).
Then layer in path structure.
A simple but powerful view is: for the top N paths (by volume), show:
- P(V | path) (value attainment rate within horizon)
- TTV percentiles among value-reachers: p50, p90, p95
- share of users taking the path
- where the path tends to terminate (last meaningful event before inactivity)
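As a sketch, the volume-share and termination columns of that view can be derived directly from per-user event sequences. The events, the data, and the choice of a two-event prefix as the “path” are all hypothetical:

```python
# Sketch: for each path (defined here as the user's first two events,
# a hypothetical modeling choice), report its share of users and, for
# users who never reached value, where the sequence terminated.
from collections import Counter, defaultdict

# (user_id, events in order, reached_value) — illustrative data only
users = [
    ("u1", ["signup", "create_project", "connect", "sync", "value"], True),
    ("u2", ["signup", "create_project", "connect"], False),
    ("u3", ["signup", "create_project", "connect"], False),
    ("u4", ["signup", "invite", "value"], True),
]

total = len(users)
share = Counter()                       # users per path
last_event = defaultdict(Counter)       # terminal event of non-reachers, per path

for _, events, reached in users:
    path = tuple(events[:2])            # short prefix as the "path"
    share[path] += 1
    if not reached:
        last_event[path][events[-1]] += 1

for path, n in share.items():
    print(path, f"share={n / total:.2f}", dict(last_event[path]))
```

Here the high-volume path terminates at “connect” for every non-reacher, the signature of a dead-end transition discussed below.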
You’re watching for three patterns:
- A high-volume path with low P(V | path) — likely a broken path.
- A path with acceptable P(V | path) but a terrible p90 — a “slow but possible” path (often missing guidance).
- Cohort shifts in path mix — product changes or acquisition changes sending more users onto worse paths.
The key is to treat path as a conditioning variable rather than a descriptive artifact.
Plotted as per-path TTV CDFs, this view is not about “which line is best.” It’s about making the shape legible: some paths compress uncertainty (steep early CDF), others create long tails (slow climb), and some never converge.
UNDERSTAND: Compare successful vs failed sequences to find divergence points
Once you can see that “Path B is the long tail,” you still don’t know why. The why lives in sequence divergence.
A practical approach:
- Define value precisely (your “value event” or value state), and define a horizon (e.g., 30 days).
- Partition users into:
- Value-reachers (R): reached value within horizon
- Non-reachers (N): did not reach value within horizon
- Within a time window from signup (say the first 24h, 72h, or 7d), compare the event sequences.
You are looking for events that are:
- common among R but rare among N (missing prerequisites or discovery failures), and
- common among N but rare among R (false trails, dead ends, misinterpretations).
Formally, an event e is diagnostic when the conditional probabilities separate: P(e | R) ≫ P(e | N), or the reverse.
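A sketch of that comparison, ranking events by the gap P(e | R) − P(e | N); the event names and group membership are made up:

```python
# Sketch: rank events by how strongly they separate value-reachers (R)
# from non-reachers (N). Each user is represented by the set of events
# they performed in the window. All data is hypothetical.
reachers = [{"connect", "sync", "query"}, {"connect", "sync"}, {"connect", "query"}]
non_reachers = [{"connect"}, {"connect", "invite"}, {"invite"}]

def event_rate(groups, event):
    """P(event occurred | group): fraction of users whose set contains it."""
    return sum(event in g for g in groups) / len(groups)

events = set().union(*reachers, *non_reachers)
gaps = {
    e: event_rate(reachers, e) - event_rate(non_reachers, e)
    for e in events
}

# Largest absolute gaps first: positive gaps are candidate prerequisites,
# negative gaps are candidate false trails.
for e, gap in sorted(gaps.items(), key=lambda kv: -abs(kv[1])):
    print(e, round(gap, 2))
```

In this toy data, “sync” separates positively (a likely prerequisite) and “invite” separates negatively (a likely false trail), while “connect” barely separates at all.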
But “event occurs” is often too weak. The more revealing signal is transition likelihood: users who do e_i next do e_j (or fail to).
This is where broken paths show up: the product allows a plausible action e_i, but the “right next step” e_j is not discovered, not obvious, or not feasible for certain segments.
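The transition version can be sketched as per-group first-order transition probabilities, so you can see which next step reachers take after e_i that non-reachers do not; the sequences below are hypothetical:

```python
# Sketch: estimate P(next = e_j | current = e_i) separately for reachers
# and non-reachers, to surface transitions present in one group but
# missing in the other. Sequences are hypothetical.
from collections import Counter

def transition_probs(sequences):
    """First-order transition probabilities over consecutive event pairs."""
    counts, totals = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {pair: c / totals[pair[0]] for pair, c in counts.items()}

reachers = [["connect", "sync", "query"], ["connect", "sync"]]
non_reachers = [["connect", "invite"], ["connect"]]

p_r = transition_probs(reachers)
p_n = transition_probs(non_reachers)

# The broken transition: after "connect", reachers go to "sync";
# non-reachers never do.
print(p_r.get(("connect", "sync"), 0.0), p_n.get(("connect", "sync"), 0.0))
```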
Typical broken-path patterns you can see in event sequences
1) False activation sequences.
Users do the “activation” event but then fail to progress toward actual value. Example: they “create dashboard” but never “connect data,” or they “invite teammate” but never perform the action that demonstrates value.
In sequence terms: your activation event is not a gateway; it’s a branch that includes many non-value trajectories.
2) Dead-end transitions.
Users commonly reach an intermediate event (e.g., “integration connected”) and then stop. Successful users follow it with a specific next event (“first successful sync,” “map fields,” “run first query”), while unsuccessful users do not.
This is not primarily a UI issue—it’s a missing state transition or missing affordance that bridges “connected” to “working.”
3) Segment-specific prerequisites.
One segment requires an extra step (security approval, admin permissions, data model setup). If the product path assumes everyone is a self-serve operator, those users will show up as a long tail and low P(V | path)—but only within that cohort.
The path isn’t universally broken. It’s broken for a slice you might be growing into.
This kind of view forces a sharper question than “where do users drop?” It asks: after a meaningful action, what do successful users do next that others do not?
A note on “paths” in B2B SaaS reality
Senior teams often reject path analysis because real usage is messy: accounts have multiple users, events are noisy, and “the path” is not a single ordered list.
That’s true. But it’s also why path analysis is valuable—because it reveals structured mess. The solution isn’t to demand perfect linearity; it’s to define the unit of analysis (user vs account), choose a time window, and accept that you’re comparing probabilistic sequences, not single canonical flows.
Two pragmatic rules:
- Use user-level paths for self-serve products and for early onboarding actions; switch to account-level for multi-stakeholder value (where an admin configures and an end user experiences value).
- Focus on short prefixes (first 3–7 meaningful events) rather than full-session history. Broken paths typically reveal themselves early via missing transitions and dead ends.
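Those two rules can be sketched as a prefix-extraction step, assuming a hand-picked set of “meaningful” events and account-level units; both are modeling choices, and every name below is illustrative:

```python
# Sketch: reduce raw events to short per-unit prefixes of "meaningful"
# events. The unit of analysis (user vs account) and the meaningful-event
# list are modeling choices; all data here is hypothetical.
from collections import defaultdict

MEANINGFUL = {"create_project", "connect", "sync", "invite", "value"}
PREFIX_LEN = 3  # first 3-7 meaningful events, per the rule above

# (unit_id, timestamp, event)
raw = [
    ("acct1", 1, "page_view"), ("acct1", 2, "create_project"),
    ("acct1", 3, "page_view"), ("acct1", 4, "connect"),
    ("acct1", 5, "sync"), ("acct1", 6, "value"),
    ("acct2", 1, "create_project"), ("acct2", 2, "invite"),
]

by_unit = defaultdict(list)
for unit, ts, event in sorted(raw, key=lambda r: (r[0], r[1])):
    if event in MEANINGFUL:            # drop noise events like page views
        by_unit[unit].append(event)

prefixes = {unit: tuple(events[:PREFIX_LEN]) for unit, events in by_unit.items()}
print(prefixes)
```

Switching from account-level to user-level analysis is then just a change in what `unit_id` means, not a change in the algorithm.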
Why the mistake persists (even when teams know better)
Even analytically mature organizations fall back to funnels because of incentives and tooling.
- Funnels produce a clean “ownable step.” Path breakage often spans multiple teams (integration, permissions, data model, onboarding), which is politically and operationally harder.
- Dashboards reward stability. Path distributions surface variance—multiple truths at once—which can feel like losing control of the narrative.
- Many analytics stacks make event sequences hard to query rigorously without heavy custom work, so teams default to what’s easy.
The result is a systematic bias toward optimizing what’s measurable rather than diagnosing what’s causal.
Path-based diagnosis flips that: you start from outcomes and work backward through sequences to isolate likely causal structure.
IMPROVE: Turn broken paths into product decisions (not “tips and tricks”)
Once you’ve identified divergence points and broken transitions, the next step isn’t “optimize onboarding copy.” It’s to decide what kind of product problem you have.
Here are the decision patterns that typically follow, expressed as trade-offs a Head of Product can actually commit to.
1) If a path is broken because of missing prerequisites: make prerequisites explicit
If users cannot reach value without a configuration step (permissions, data mapping, schema selection), hiding it behind “Connect integration” is dishonest. It creates a high-volume, low-P(V | path) path that looks like user failure but is actually product ambiguity.
Product decision: introduce an explicit intermediate state and guide users to completion. In event terms, you want to increase:
- P(working | connected), e.g. P(“first successful sync” | “integration connected”), and reduce the time between the two events, especially at high percentiles (p90, p95).
This is often a structural change: new setup state machine, validation, clear “blocked” messaging, and in-product checks that distinguish “connected” from “working.”
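The two numbers behind “connected vs. working” can be sketched directly: the conditional probability of reaching the working state, and the high-percentile gap between the two events. The timestamps and event pairing below are hypothetical:

```python
# Sketch: P(working | connected) and the tail of the connected-to-working
# gap. Each row is (connected_at_hours, working_at_hours or None) for a
# user who connected; data is hypothetical.
import math

gaps = [(0, 1), (0, 2), (0, None), (0, 30), (0, None), (0, 3)]

working = sorted(w - c for c, w in gaps if w is not None)
p_working = len(working) / len(gaps)   # P(working | connected)

# Nearest-rank p90 of the gap among those who did reach "working"
p90 = working[max(0, math.ceil(0.9 * len(working)) - 1)]
print(p_working, p90)
```

A low `p_working` says the transition is broken for many users; a large `p90` says that even when it works, it works slowly for the tail.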
2) If a path is slow due to heterogeneity: choose between speed and predictability
If one segment requires more work (enterprise data complexity, multiple integrations), the goal might not be to make everyone fast; it might be to make the process predictable.
Product decision: decide whether you are optimizing median TTV or tail risk. Distribution-wise, you may accept a stable p50 while targeting a reduction in p90 and p95 by introducing guided templates, defaults, or opinionated “happy paths” for specific segments.
3) If “activation” is a false trail: realign what you instrument and what you incentivize
If the common path includes an activation event that doesn’t actually correlate with value, it will attract optimization energy forever. You will keep improving conversion to a moment that is not causal.
Product decision: redefine activation around value-bearing milestones (or value states), and update onboarding goals accordingly. Analytically, your aim is to increase the early separation P(V | milestone reached early) − P(V | milestone not reached early).
The more your early events are predictive of value, the less you rely on downstream lagging indicators to know whether you’re improving.
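That separation can serve as a scoring rule for candidate activation milestones. In this hypothetical sketch, “create_dashboard” turns out to be a false trail while “connect_data” actually predicts value; the events and users are invented for illustration:

```python
# Sketch: score candidate activation milestones by how well their early
# occurrence predicts value: lift = P(V | e early) - P(V | no e early).
# All events and users are hypothetical.

# (set of early events, reached_value) per user
users = [
    ({"create_dashboard", "connect_data"}, True),
    ({"create_dashboard", "connect_data"}, True),
    ({"create_dashboard"}, False),
    ({"create_dashboard"}, False),
    ({"connect_data"}, True),
    (set(), False),
]

def p_value_given(rows, event, present):
    """P(reached value | event present/absent in the early window)."""
    subset = [reached for evts, reached in rows if (event in evts) == present]
    return sum(subset) / len(subset) if subset else 0.0

candidates = {"create_dashboard", "connect_data"}
lift = {
    e: p_value_given(users, e, True) - p_value_given(users, e, False)
    for e in candidates
}
print(lift)
```

A milestone with lift near zero, like “create_dashboard” here, is exactly the kind of event that attracts optimization energy without moving value.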
4) If paths diverge because of discoverability: decide whether to simplify or to guide
Broken paths often look like “users didn’t find the next step.” That can be solved by simplification (remove options) or guidance (make the right next step obvious). These are different philosophies with different costs.
Product decision: for high-stakes transitions, prefer guidance over choice. This usually means opinionated defaults, context-aware prompts, and reducing branching until value is reached. In distribution terms, guidance is often what compresses variance—raising the CDF earlier, not just improving the mean.
Diagnosis before optimization: a concrete way to run this in practice
You don’t need a perfect model. You need a disciplined loop.
WATCH: Start weekly with TTV distributions and path-conditioned summaries. Ask: which path’s CDF shifted? Did the tail thicken? Did the path mix change by cohort?
UNDERSTAND: For the path(s) that worsened, run divergence analysis:
- Compare event prefixes for vs .
- Identify transitions with large conditional gaps.
- Validate with segmentation (role, plan, acquisition channel, integration type) to distinguish friction from heterogeneity.
IMPROVE: Make a product-level call:
- If it’s friction, remove or automate.
- If it’s heterogeneity, separate paths and set expectations.
- If it’s false activation, change what you measure and what you guide.
- If it’s discoverability, guide users through the correct transition and instrument the “working” state.
The output should not be “we need a better checklist.” It should be “the ‘Connect integration’ event is not a stable milestone; the real milestone is ‘first successful sync.’ Our product currently allows users to believe they’re done when they’re not, creating a high-volume dead end. Fixing that requires a new setup state and explicit validation.”
Closing: broken paths are why TTV feels mysterious
When TTV is slow, teams often act as if “onboarding” is a single lever. Event sequences show the opposite: users are taking multiple routes, and some of those routes are quietly failing. Funnels don’t reveal this because they are built to summarize progression, not to diagnose divergence.
Path analysis makes TTV legible as a distribution shaped by competing sequences. It tells you whether you have a speed problem, a tail problem, a segmentation problem, or a measurement problem. More importantly, it forces product decisions that change the structure of the experience rather than cosmetically improving a proxy.
This is the kind of analysis Tivalio is designed to support: starting from raw event data and user-level timestamps, treating TTV as a distribution, and using path-conditioned diagnosis to connect “what’s happening” to “what should we change” without skipping the hard part in the middle.
