
Optimize

Traffic sources splitting on experiments

Visitor ✭ ✭ ✭
# 1

Hi,

 

We're running our first A/B test with Google Optimize and we'd like to make sure that when it splits the traffic, it does so evenly at a qualitative level, not just in volume.

 

For example, on a website it's likely that different traffic sources have very different conversion rates. So, to make sure the test is statistically sound, it isn't enough that each variant receives the same volume of traffic; each variant should also receive the same volume of traffic from each source. Is that the case?

 

We assume Google Optimize takes this kind of thing into account, but we'd like to confirm it so we can trust the experiment results.

 

Is there any way to do post-test segmentation to interpret the results in more depth?

 

Tomas

 

 

1 Expert reply


Google Employee
# 2

Hi Tomás,

 

There are a few details here that can be surprising, so I'll briefly explain how traffic is assigned.

 

Once an experiment has started, a visitor to a page that matches the targeting rules will be randomly assigned to one of the variants.

 

There is no attempt to split traffic equally between different sources or anything like that; the assignment is completely random.

 

So if there is little traffic from a source, it's quite possible for it to look unevenly split, simply because of randomness. For example, if only 2 of your 1,000 visitors come from a source, there is a 50% chance that both land in the same variant (assuming just the original and one variant).
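To see where that 50% figure comes from, here is a small simulation. This is not Optimize's actual implementation, just a sketch of what independent random 50/50 assignment does to two visitors from the same source:

```python
import random

def assign_variant(rng):
    # Each eligible visitor is bucketed independently and at random,
    # with no balancing across traffic sources.
    return rng.choice(["original", "variant"])

def both_in_same_variant_rate(trials=100_000, seed=42):
    # Estimate how often two visitors from one source end up
    # in the same arm of a two-arm experiment.
    rng = random.Random(seed)
    same = sum(assign_variant(rng) == assign_variant(rng) for _ in range(trials))
    return same / trials
```

Running `both_in_same_variant_rate()` gives a value very close to 0.5, matching the 50% chance mentioned above.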

 

Once a visitor has been assigned to a variant, they will "stick" to it until the end of the experiment. This means that if they come back to the page and the targeting rules are met, they will see the same experience.
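One common way to implement this kind of sticky assignment is deterministic hash-based bucketing, where hashing the (experiment, visitor) pair always yields the same variant. Optimize itself persists the assignment client-side rather than recomputing it, so treat this purely as an illustrative sketch of the "stickiness" property:

```python
import hashlib

def sticky_variant(experiment_id: str, visitor_id: str, n_variants: int = 2) -> int:
    # Hashing the same (experiment, visitor) pair always produces the same
    # digest, so a returning visitor is bucketed into the same variant index.
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    return int(digest, 16) % n_variants
```

With this scheme, `sticky_variant("exp-1", "visitor-42")` returns the same index on every call, for the lifetime of the experiment, without storing any state.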

 

Furthermore, any conversions or behavior of that user until the end of the experiment are attributed to that variant (even if the user never meets the targeting rules again). For example, you may have a rule matching "new visitors", who see a different experience the first time they land on a page. When they come back to that page later they will not see that experience (since they are no longer new), but any conversions or metrics will still be attributed to their first experience, and their sessions will be counted towards that variant.

 

Another common reason traffic doesn't look evenly split (again, on a site without much traffic) is the difference between users and sessions. A few very active users may be assigned to one variant and produce much more traffic than the rest. For example, if one variant has a user who came just once while the other has a user who came back on 10 different days, the second variant will have 10 sessions while the first has just 1.
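The users-versus-sessions distinction is easy to demonstrate on a toy session log (the user IDs and counts here are made up for illustration):

```python
from collections import Counter

# Hypothetical session log: one (user_id, variant) row per session.
# One user visited once; the other, very active user came back 10 times.
sessions = [("u1", "original")] + [("u2", "variant")] * 10

users_per_variant = {v: len({u for u, w in sessions if w == v})
                     for v in ("original", "variant")}
sessions_per_variant = Counter(v for _, v in sessions)
```

Users split 1 vs 1, a perfectly even assignment, yet sessions split 1 vs 10, which is why session-level traffic can look lopsided even when user-level bucketing is fair.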

 

About post-test segmentation: all experiment data is available in the Google Analytics reports, and you can create GA segments

(https://support.google.com/analytics/answer/3123951?hl=en) to see the visitors of each variant across the full GA toolset. You need to create a new advanced sequence segment using the experiment and variant dimensions.

Look here for an example:

https://support.google.com/360suite/optimize/answer/7364397?hl=en

 

If you want to do your own statistical analysis, you can even access the raw data using the BigQuery export; see:

https://support.google.com/analytics/answer/3416092
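Once you have per-variant session and conversion counts out of the export, a classic two-proportion z-test is one way to run your own significance check. This is a generic textbook test, not Optimize's own Bayesian analysis, and the counts below are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Compare conversion rates of two variants given their conversion
    # and session counts, using a pooled two-proportion z-test.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, `two_proportion_z(120, 1000, 150, 1000)` compares a 12% rate against a 15% rate over 1,000 sessions each; the same function works directly on counts queried from the BigQuery export.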