Session Quality Score details. How is it calculated?
Recently we have been able to view the totals.sessionQualityDim field in GA/BigQuery. Presumably we have met the minimum requirements: we have an eCommerce implementation with more than 1,000 transactions, and we have waited the 30 or so days for GA to build and train the predictive model that produces this session quality metric.
My question is not about the requirements for the field to be reported, but about how the metric itself is constructed. What fields go into training the model? Is it static or adaptive? That is, is the score assigned to a visitor at the beginning of the session, or does it update as the session goes along? And what number is actually being reported in the totals.sessionQualityDim field? I know that it is a rating of how "quality" a session is, on a scale from 1 to 100. But is the reported value the maximum score attained during the session? The score at session close? The probability of conversion after some number of hits on the site?
For example, in an eCommerce setting, if a session ends with the user purchasing a product, what is that session's "quality" value? If the metric is adaptive, it should theoretically be 100 once we know they purchased something. And if the metric measures how close someone came to transacting, shouldn't someone who actually does transact get a perfect 100? Yet I have many records of users who finish their session with a purchase but whose session quality score is 65 or thereabouts. They got so close to transacting that they did in fact transact, and yet the session was rated only a 65.
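To illustrate the discrepancy concretely, here is a minimal Python sketch that tabulates sessionQualityDim for converting versus non-converting sessions, the way one might after pulling rows from the ga_sessions_* export tables. The session rows below are invented for illustration only; they are not real export data.

```python
# Minimal sketch: summarize totals.sessionQualityDim for sessions with and
# without transactions. Sample rows are invented for illustration.
from statistics import mean

# Each tuple: (sessionQualityDim, totals.transactions) -- illustrative values
sessions = [
    (65, 1),   # purchased, yet scored only 65
    (92, 1),
    (100, 1),
    (12, 0),
    (3, 0),
    (48, 0),
]

converters = [q for q, tx in sessions if tx and tx > 0]
non_converters = [q for q, tx in sessions if not tx]

print(f"converting sessions:     n={len(converters)}, mean quality={mean(converters):.1f}")
print(f"non-converting sessions: n={len(non_converters)}, mean quality={mean(non_converters):.1f}")
print(f"converters scoring below 100: {sum(1 for q in converters if q < 100)}")
```

Run against real export data, a non-trivial count of converters scoring below 100 is exactly the puzzle described above: the score evidently is not simply "did this session end in a transaction".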
This brings us back to the original question: how is the session quality metric calculated, beyond just "by machine learning"? There seems to be more to it than simply how close a session came to transacting.