Amplitude is a very good event query engine. You can build almost any funnel, any retention curve, any frequency chart, and any custom segmentation on top of a clean event stream. If the question you are asking fits inside "how many users did event X within window Y after event Z," Amplitude will answer it in under a minute.
The problem is that the questions a growth team actually needs answered on Tuesday morning are not event questions. They are distributional, attributional, and diagnostic. They have shapes Amplitude's UI was not designed to produce, and they require stitching several queries together, exporting to a notebook, and running percentile math by hand. The seven questions below are the ones I hear most often from product and growth leads, and they are the seven that Amplitude (and Mixpanel, for that matter) will not answer cleanly out of the box. For each one, there is a specific piece of structured research that does.
The seven questions
1. "Why is my activation rate stuck at X percent?"
This is the meeting-starter. Your activation rate has been sitting between 62 and 65 percent for six weeks. Nothing you ship moves it. You have run four experiments. None of them did anything. Someone asks the question out loud, and the room goes quiet.
Amplitude can tell you the rate. It can break it down by cohort, plan, and channel. It cannot tell you why it is stuck. That answer requires comparing the user attributes of the activated group against the non-activated group, ranking each attribute by its correlation with failure to activate, and reading the ranked list as a diagnostic. Amplitude's Drivers report is close, but it is event-centric, not attribute-centric, and it does not decompose the failure side of the funnel the way a diagnostic research does.
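For concreteness, here is roughly what that diagnostic looks like once the data is exported to a notebook. A minimal sketch, assuming a pandas DataFrame `users` with one row per user, a boolean `activated` column, and a handful of categorical attribute columns; all column names here are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency

ATTRIBUTES = ["plan", "channel", "company_size", "country", "device_type"]

def rank_attributes(users: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for attr in ATTRIBUTES:
        # Contingency table: attribute value x activated / not activated.
        table = pd.crosstab(users[attr], users["activated"])
        if min(table.shape) < 2:
            continue  # attribute has a single value; nothing to rank
        chi2, p, dof, _ = chi2_contingency(table)
        # Cramer's V normalises chi-square, so attributes with different
        # cardinalities are comparable on a single ranked list.
        n = table.to_numpy().sum()
        v = (chi2 / (n * (min(table.shape) - 1))) ** 0.5
        rows.append({"attribute": attr, "cramers_v": v, "p_value": p})
    return pd.DataFrame(rows).sort_values("cramers_v", ascending=False)
```

The top of that ranked list is the diagnostic: the attribute most strongly associated with failing to activate is where the next experiment should aim.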
2. "What's slowing down my Time To Value right now?"
You look at your weekly TTV number. It has drifted from 5.1 to 5.8 days over the last month. Nothing obvious changed. You did not ship anything big. The trend line is just sloping in the wrong direction, quietly, and you do not know which lever to pull.
Answering this requires decomposing the distribution shift into its contributing cohorts, identifying which segments moved and by how much, and ranking the contributions. It is not a funnel question. It is a "where did the shape change" question. Amplitude can show you the weekly mean and the weekly funnel. It cannot tell you which cohort owned the shift, because the decomposition has to run across every user attribute in parallel and return a ranked contribution list.
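The arithmetic underneath is a mix-shift decomposition. A minimal sketch, assuming two DataFrames `before` and `after` with one row per user, a `ttv_days` column, and a segment column such as `channel`; the names are hypothetical, and this version decomposes the shift in the mean rather than the full distribution.

```python
import pandas as pd

def decompose_shift(before: pd.DataFrame, after: pd.DataFrame, seg: str) -> pd.DataFrame:
    w0 = before[seg].value_counts(normalize=True)  # segment shares, before
    w1 = after[seg].value_counts(normalize=True)   # segment shares, after
    m0 = before.groupby(seg)["ttv_days"].mean()    # segment mean TTV, before
    m1 = after.groupby(seg)["ttv_days"].mean()     # segment mean TTV, after
    idx = w0.index.union(w1.index)
    # A segment absent from one period gets weight 0 there; the sum stays exact.
    w0, w1 = w0.reindex(idx, fill_value=0), w1.reindex(idx, fill_value=0)
    m0, m1 = m0.reindex(idx).fillna(0), m1.reindex(idx).fillna(0)
    out = pd.DataFrame({
        "mix_effect": (w1 - w0) * m0,   # the segment's share of signups changed
        "rate_effect": w1 * (m1 - m0),  # the segment itself got slower or faster
    })
    out["total"] = out["mix_effect"] + out["rate_effect"]
    # The totals sum exactly to the overall change in mean TTV.
    return out.sort_values("total", key=abs, ascending=False)
```

Run it once per attribute, and the segment with the largest total is the cohort that owned the shift.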
3. "Which user attribute has the biggest impact on TTV?"
This is the one that usually decides the next quarter's roadmap. You have ten candidate user attributes (plan, channel, company size, country, first-session behavior, device type, referral source, industry, team size, signup form variant). You want to know which one explains the most variance in how long users take to reach value, so you can build the next onboarding fix around it.
Amplitude will not rank attributes for you by impact on a continuous outcome variable. That is not what the product is built for. You can pull the data out and run a feature importance analysis in a notebook, and a few teams with a data scientist on staff do exactly that. Most teams do not. They guess, they ship, and they wait to see if the number moved.
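For the teams that do open the notebook, the analysis is short. A sketch using scikit-learn's permutation importance, assuming `users` holds the ten candidate attributes plus a `ttv_days` outcome column; all names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

ATTRS = ["plan", "channel", "company_size", "country", "device_type",
         "referral_source", "industry", "team_size", "signup_form_variant",
         "first_session_events"]

# One-hot encode the categorical attributes; numeric ones pass through.
X = pd.get_dummies(users[ATTRS])
y = users["ttv_days"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Permutation importance on a held-out split is less biased toward
# high-cardinality attributes than impurity-based importance.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranked = pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranked.head(10))  # which attributes explain the most TTV variance
```

One-hot encoding spreads a categorical attribute across several dummy columns, so in practice you would sum the dummy importances back up to the attribute level before reading the ranking.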
4. "Who are my slowest users and what do they have in common?"
The slowest ten percent of your users is where the most valuable ten minutes of analyst work lives, and almost nobody runs the analysis. These are the users in the p90 to p100 band of the TTV distribution: the ones who take five to ten times longer than the median to reach value, and who churn at two to five times the rate of median users. They are not a random sample. They are a population with a shared attribute profile.
Amplitude can filter to a slow user list if you know the threshold. It cannot profile the list against every attribute on the user object at once and surface the shared characteristics automatically. That analysis requires ranking every attribute by its concentration in the slowest decile, which is a specific kind of cross-tab that is not in the standard Insights toolkit.
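The cross-tab itself is simple once the data is out. A minimal sketch, assuming the same hypothetical `users` DataFrame with a `ttv_days` column; for every attribute value it computes lift, the value's share among the slowest ten percent divided by its share in the overall population.

```python
import pandas as pd

def profile_slowest_decile(users: pd.DataFrame, attrs: list[str]) -> pd.DataFrame:
    cutoff = users["ttv_days"].quantile(0.9)
    slow = users[users["ttv_days"] >= cutoff]  # the p90 to p100 band
    rows = []
    for attr in attrs:
        share_slow = slow[attr].value_counts(normalize=True)
        share_all = users[attr].value_counts(normalize=True)
        lift = (share_slow / share_all).dropna()
        for value, l in lift.items():
            rows.append({"attribute": attr, "value": value, "lift": l})
    # Lift well above 1.0 marks a characteristic concentrated in the slow tail.
    return pd.DataFrame(rows).sort_values("lift", ascending=False)
```

A value with a lift of 3.0 is three times as common among your slowest users as it is overall, and that is usually the headline of the analysis.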
5. "Is my TTV actually improving over time, or is it just noise?"
Your TTV went from 5.6 days last quarter to 5.3 days this quarter. Is that real? Is it a cohort mix shift? Is it inside the normal week-to-week variance? Before you announce it as a win, you need to know whether the movement is a signal or a rounding artifact.
This is the kind of question that needs a longitudinal view of the full distribution, a statistical comparison of cohorts across time, and an attribution of any movement to either a genuine product improvement or a change in the upstream user mix. Amplitude's trend charts will show you the line. They will not tell you whether the line means anything.
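A minimal version of the statistical check, assuming two NumPy arrays of per-user TTV values, one per quarter (names hypothetical). TTV distributions are heavy-tailed, so this compares medians with a bootstrap rather than means with a t-test.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def is_improvement_real(prev_q: np.ndarray, this_q: np.ndarray, n_boot: int = 5000) -> dict:
    # One-sided test: is last quarter's TTV stochastically greater than this quarter's?
    stat, p = mannwhitneyu(prev_q, this_q, alternative="greater")
    # Bootstrap a confidence interval on the difference in medians.
    rng = np.random.default_rng(0)
    diffs = [
        np.median(rng.choice(prev_q, prev_q.size))
        - np.median(rng.choice(this_q, this_q.size))
        for _ in range(n_boot)
    ]
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    # A real improvement needs a small p-value AND an interval that excludes 0.
    return {"p_value": p, "median_diff_days_ci": (lo, hi)}
```

If the interval clears zero, the remaining question is question 2's: did the movement come from the product, or from a change in the upstream mix.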
6. "What does my full TTV distribution look like, not just the average?"
This is the most basic question in distributional product analytics, and it is also the one most analytics tools handle worst. You want a histogram of per-user times from signup to value event, across your last thirty days of signups, with the p50, p75, and p95 marked. You do not want the mean. You want the shape.
Amplitude can build a funnel with time-to-complete bucketing, and with enough effort you can produce something that resembles this chart. It is not the same as a research template that computes the full distribution, runs the percentile analysis, and segments the tail automatically. The first is a query you construct every time. The second is a research you re-run in two clicks.
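The notebook version of the chart, for reference. A minimal sketch, assuming a `ttv_days` array of per-user times for the last thirty days of signups (hypothetical name).

```python
import numpy as np
import matplotlib.pyplot as plt

p50, p75, p95 = np.percentile(ttv_days, [50, 75, 95])

plt.hist(ttv_days, bins=50)
for value, label in [(p50, "p50"), (p75, "p75"), (p95, "p95")]:
    plt.axvline(value, linestyle="--")
    plt.text(value, plt.ylim()[1] * 0.9, f"{label} = {value:.1f}d")
plt.xlabel("days from signup to value event")
plt.ylabel("users")
plt.title("TTV distribution, last 30 days of signups")
plt.show()
```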
7. "Did the change I shipped last week actually reduce churn for the struggling cohort?"
The hardest question of the seven. You shipped a fix targeted at the slowest cohort three weeks ago. Did it work? Specifically, did it compress the right tail of the distribution for that segment, without accidentally dragging down the median? Did the users who previously landed in the p90 to p100 band move closer to the median, or did they just churn earlier?
This is a distribution-shape question about a specific cohort across a specific time window, compared against a pre-change baseline. It needs the same tooling as question 2, applied in reverse: what changed in the shape, and is the change attributable to the release? Amplitude can tell you the funnel before and after. It cannot do the distributional comparison for you.
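A minimal sketch of that comparison, assuming `pre` and `post` are NumPy arrays of TTV values for the targeted cohort before and after the release (hypothetical names). It reports how each percentile moved and whether the overall shape changed at all.

```python
import numpy as np
from scipy.stats import ks_2samp

def tail_compression_report(pre: np.ndarray, post: np.ndarray) -> dict:
    pcts = [50, 75, 90, 95, 99]
    shift = np.percentile(post, pcts) - np.percentile(pre, pcts)
    # Two-sample KS test: did the distribution's shape change at all?
    stat, p = ks_2samp(pre, post)
    # Caveat: compare cohort sizes too. A tail that shrank because slow
    # users churned before ever reaching value is not a win.
    return {
        "percentile_shift_days": dict(zip([f"p{q}" for q in pcts], shift)),
        "ks_p_value": p,
    }
```

The result you want is p90 through p99 strongly negative while p50 barely moves; the caveat in the last comment is the "churned earlier" trap from the question itself.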
The common thread
The seven questions above share one property. They are not event queries. They are distributional, attributional, and diagnostic questions about how your product works for real users, and they require structured, reproducible, auditable research — not another dashboard. The distinction matters, because a dashboard is a query result. A research is a methodology with a fixed, re-runnable shape that returns the same answer for the same data, every time.
Tivalio is built around this distinction. Every research in the library is deterministic, documented, and re-runnable on a schedule. You connect it to your existing Amplitude or Mixpanel data, you pick the question you want to answer, and you get a result that an auditor could re-run six months later and get the same conclusion. That is a different shape of tool than a dashboard. It is also the shape of tool the seven questions above actually need.
If your weekly review keeps circling back to questions Amplitude cannot answer, the issue is not that your analytics tool is broken. It is that you are using an event query engine to answer questions that are not event questions. Bring the right tool to the right question, and most of the meetings where people argue about numbers stop feeling like arguments. The Tivalio product is what that second tool looks like.