Chapter 5 of 12:
Hi! Welcome back, stranger!
Another crazy-busy, super-productive, information-jam-packed week is in the books. Five down, seven to go, and these CXL courses continue to humble me as they shatter my painfully under-qualified overestimation of my understanding of certain Digital Analytics subjects. But I suppose that's just the Continuing Education Paradox which, for those of my adoring readers who are unfamiliar with the phenomenon, is the point in one's pursuit of learning where you begin to realize that the more you learn, the more you discover how very little you actually know. I encountered this startling realization somewhere in the junior year of my undergraduate degree, and the effects have yet to wear off.
There has been so much that I have loved about this quest that I started on a whim a little over six weeks ago, when I stumbled across CXL's website, casually thumbed through their courses and mini-degree programs, and decided to throw caution to the wind and apply for their scholarship program on the spot. One of the things I have really appreciated, and want to discuss in this post, is the various strategic models and frameworks they provide for conducting Jedi-level data tracking, analysis, and visualization. I mentioned a few of those in my previous post, such as the A.C.E. (Awareness, Conversion, Engagement) framework for setting goals in Google Analytics, or the Q.I.A. (Question, Information, Action) model for prioritizing how to approach your GA analysis, but the latest models that I'm super jazzed about are the:
- P.I.E. Model
- I.C.E. Model
These two models are presented in one of the two newly added courses in the Digital Analytics mini-degree, A/B Testing Prioritization, where CXL's founder Peep Laja discusses his strategies and best practices for conducting optimization tests.
The P.I.E. Model provides a framework to prioritize your testing by measuring the Potential, Importance, and Ease (on a scale from one to ten) of a test or tests you are considering running and then indicating priority by taking the average of each test’s scores. It looks something like this:
It's fairly straightforward in its evaluation of which projects to prioritize: you base that on the highest P.I.E. score, which is the mean (average) of the Potential, Importance, and Ease scores for each lift zone (a.k.a. what you are looking to test) being considered. Peep points out, and I agree, that this is a bit subjective since you are deciding which numbers to place in the evaluation parameters, and if you're on a team you are likely to have differing opinions on the score of each lift zone, since each team involved views it differently. Your paid advertising team likely views the checkout page as most important, but the dev department will have a differing view on the ease of executing the changes. So, not a perfect framework by any means, but it's helpful in a pinch, rather than just shooting from the hip. The way I see it, a simple, flawed plan is better than no plan at all. But, you do you, boo!
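If you wanted to run the P.I.E. math yourself instead of in a spreadsheet, it boils down to a mean of three scores. Here's a minimal sketch of that idea (the lift zone names and 1-10 ratings below are hypothetical examples, not from CXL's materials):

```python
def pie_score(potential, importance, ease):
    """P.I.E. score: the mean of the three 1-10 ratings."""
    return (potential + importance + ease) / 3

# Hypothetical lift zones with (Potential, Importance, Ease) ratings.
candidates = {
    "Checkout page": (9, 9, 4),   # big upside, important, hard for dev
    "Homepage hero": (6, 7, 8),   # modest upside, easy to change
    "Pricing page": (8, 6, 6),
}

# Rank lift zones by P.I.E. score, highest priority first.
ranked = sorted(candidates.items(),
                key=lambda item: pie_score(*item[1]),
                reverse=True)

for name, scores in ranked:
    print(f"{name}: {pie_score(*scores):.1f}")
```

Sorting by the mean is equivalent to sorting by the sum, so the averaging step mostly matters for keeping the score on the familiar 1-10 scale.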
The I.C.E. Model is similar but takes a more binary approach to the evaluation, using an index score from zero to four instead of the one-to-ten scale of the P.I.E. Model. It looks something like this:
You would apply this to each experiment you’re considering conducting and the score is evaluated according to the opportunity impact:
- 0 = Poor Opportunity (Don’t Pursue)
- 1 = Minor Opportunity
- 2 = Average Opportunity
- 3 = Strong Opportunity
- 4 = Extraordinary Opportunity (Prioritize First)
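In code, the single-index approach is even simpler than P.I.E.: score each experiment once, drop the zeros, and work from the top down. A minimal sketch (the experiment names and scores below are hypothetical):

```python
# Opportunity labels for the 0-4 index described above.
LABELS = {
    0: "Poor Opportunity (Don't Pursue)",
    1: "Minor Opportunity",
    2: "Average Opportunity",
    3: "Strong Opportunity",
    4: "Extraordinary Opportunity (Prioritize First)",
}

# Hypothetical experiments with their opportunity scores.
experiments = {
    "Shorten signup form": 4,
    "New CTA copy": 2,
    "Move testimonials": 0,   # scored zero: don't pursue
    "Add trust badges": 3,
}

# Drop the zeros and queue the rest, highest score first.
queue = sorted((item for item in experiments.items() if item[1] > 0),
               key=lambda item: item[1], reverse=True)

for name, score in queue:
    print(f"{name}: {score} ({LABELS[score]})")
```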
Again, a certain level of subjectivity is involved, but it provides a reasonably effective way to evaluate which experiments to proceed with first. Peep provided another CXL proprietary model that adds much more context by categorizing the issue/opportunity with detailed descriptions, a plan of action, and a priority scale similar to what you might see in various project management software platforms.
There really is no perfect solution to how to prioritize your optimization experiments, and anyone who says so is trying to sell you something. You just have to figure out what model works best for you and make it your own.
On a separate, but equally important, note: a fundamental understanding of statistics is vital in all things digitally analytical…and in life (but that's for a different post on a different day). Averages can be misleading without more granular evaluation, especially in Google Analytics, and GA is full of 'em. There is something reassuring about understanding how metrics are calculated and being aware of where those calculations can lead you astray. You're probably sitting there, scratching your head and thinking to yourself, "…cool story, Hansel, but what does it mean?"
Well, I'm glad you asked. Say you're considering performing a test on a low-traffic form page that has a higher bounce rate than you would like to see. So you go ahead and set up your optimization experiment, launch it, and you wait two weeks, a month, and…nothing. Your experiment comes back inconclusive. But why? The bounce rate was high! Well, remember, bounce rates are averages, and averages can be misleading without context. Remember when I said that this was a low-traffic form page? The bounce rate is the number of single-page sessions (bounces) on a page divided by the total number of sessions that started there. This form page presents two issues. From a testing perspective, you have to have a certain amount of traffic to achieve a statistically significant result. From a landing page perspective, the high bounce rate could be the result of top-of-funnel, unqualified traffic coming to your site, realizing it's not what they're looking for, and bouncing. The problem here wasn't with the page, but with the advertising execution.
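To put a rough number on the traffic problem, here's a minimal sketch (not CXL's method, and all the rates below are hypothetical) of the standard two-proportion normal approximation for how many visitors each variant of an A/B test needs before a given lift becomes detectable:

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(p_base, p_target, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a shift from p_base to
    p_target at significance alpha with the given statistical power,
    using the two-proportion normal approximation."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided significance threshold
    z_b = z(power)           # desired power
    p_bar = (p_base + p_target) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_base * (1 - p_base)
                              + p_target * (1 - p_target))) ** 2
    return ceil(numerator / (p_target - p_base) ** 2)

# Say the form converts at 10% and we hope a redesign lifts it to 12%.
n = required_sample_size(0.10, 0.12)
print(f"{n} visitors per variant")
```

For a small lift like that, the answer runs into the thousands of visitors per variant, which is exactly why a low-traffic page can leave an experiment inconclusive for months. Bigger expected lifts need far fewer visitors.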
The devil is in the details, my friends. Numbers matter, and understanding how they matter can be the x-factor that is either making you the rockstar that you are or keeping you from realizing your full analytical bad @$$ potential.