Conversion over time report, question
I have a question on the "Conversion rate over time" report.
Although Optimize will calculate a winner for you, I still like to analyse the test results myself and see whether I reach the same conclusion as the tool regarding the winner. In that analysis, the conversion rate over time is an important factor. If I see that variant 1 has consistently performed at a higher CVR than Original, that strengthens my belief that variant 1 is better. If, on the other hand, I see that the CVR curves for variant 1 and Original cross each other many times, that is conversely a sign that there is no clear winner (no matter what the aggregated CVR for the whole test period is).
When looking at the reports in GA, I get this info very clearly: conversions / sessions, which gives me the CVR. For some reason, though, Optimize is not that straightforward when it comes to CVR over time. It instead shows a conversion rate interval and a median. This makes the CVR curves in each experiment look "too perfect", in the sense that you always get two curves that hardly ever cross each other: one is always above the other. This distorts the picture of what the CVR actually was day by day, and thus it distorts the test evaluation.
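To be concrete about the analysis I mean: it is just conversions / sessions per day, computed from the GA numbers, and then checking which variant leads each day. A minimal sketch with made-up daily figures (the data below is purely hypothetical):

```python
# Hypothetical daily (sessions, conversions) pairs pulled from a GA report.
# The observed daily CVR is simply conversions / sessions.
original = [(1000, 30), (950, 28), (1100, 41)]
variant1 = [(1010, 36), (940, 35), (1080, 38)]

for day, ((s0, c0), (s1, c1)) in enumerate(zip(original, variant1), start=1):
    cvr0, cvr1 = c0 / s0, c1 / s1
    leader = "variant 1" if cvr1 > cvr0 else "original"
    print(f"day {day}: original {cvr0:.2%}, variant 1 {cvr1:.2%} -> {leader} ahead")
```

With these numbers the lead changes on day 3, i.e. the curves cross; the more often that happens over a full test period, the weaker the case for a clear winner.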
Can anyone explain to me why Optimize feels the need to present intervals and medians, instead of simply showing what the CVR actually was for each day? (So one number, not an interval of numbers.)
And yes, I have read the explanation in the support forum, I just don't feel it answers the above question:
"The conversion rate ranges you see in Optimize, which you can see in both the Improvement overview and Objective detail cards, are a range of modeled conversion rates. Optimize is calculating the actual conversion rate for a variant. This value may not yet be represented by a variant’s observed conversion rate, especially early-on in the experiment. You can expect the future conversion rate of a variant to fall into the range you see in Optimize 95 percent of the time. Conversely, the conversion rate metrics you see for the same experiment in the Analytics Content Experiment report are an empirical calculation of Conversions / Experiment Sessions. These two conversion rate values (the modeled conversion rate range in Optimize and the observed conversion rate in Analytics) are expected to be different, and we recommend that you use the values in Optimize for your analysis."
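For what it's worth, the "modeled conversion rate" in that explanation sounds like a Bayesian credible interval for the underlying conversion rate, as opposed to the raw observed rate. Below is a minimal sketch of that idea, assuming a Beta posterior with a uniform prior and Monte Carlo sampling; this is only an illustration of the kind of model such a range could come from, not Optimize's actual implementation:

```python
import random

def modeled_cvr_interval(conversions, sessions, n_samples=100_000, seed=42):
    """Approximate a 95% credible interval and median for the 'true' CVR.

    Assumes a Beta(1 + conversions, 1 + failures) posterior (uniform prior)
    and estimates its quantiles by sampling. Illustrative only.
    """
    rng = random.Random(seed)
    samples = sorted(
        rng.betavariate(1 + conversions, 1 + sessions - conversions)
        for _ in range(n_samples)
    )
    lo = samples[int(0.025 * n_samples)]
    med = samples[int(0.50 * n_samples)]
    hi = samples[int(0.975 * n_samples)]
    return lo, med, hi

# Early in a test the sample is small, so the modeled range is wide even
# though the observed (empirical) CVR is a single number:
observed = 12 / 300   # conversions / sessions, as in the GA report
lo, med, hi = modeled_cvr_interval(12, 300)
print(f"observed {observed:.3f}, modeled {lo:.3f}-{hi:.3f} (median {med:.3f})")
```

This would explain the behaviour I'm complaining about: the plotted median is a smoothed model estimate rather than the day's raw conversions / sessions, so the two curves look far more stable than the underlying daily data. It still doesn't answer why the raw daily CVR isn't shown alongside it.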