Intro to Test vs. Control
One of the beautiful things about CRM analytics is the ability to design controlled experiments. Randomly sampling a homogeneous group of individuals allows CRM professionals to test the impact of a particular CRM offer or message against another offer (or no offer at all), and then, after applying some statistics, conclude whether or not that offer or message drove a desired behavior, like an open, click, or conversion. This process is generally referred to as Test versus Control.
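As a minimal sketch of that random split (the 80/20 split, function name, and fixed seed below are illustrative choices, not a prescribed methodology):

```python
import random

def assign_groups(customer_ids, test_fraction=0.8, seed=42):
    """Randomly split a homogeneous customer list into Test and Control."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = customer_ids[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return shuffled[:cut], shuffled[cut:]  # (test, control)

# Example: 10 customers, 80/20 Test/Control split
test, control = assign_groups(list(range(10)))
```

Because assignment is random, any systematic difference in behavior between the two groups can then be attributed to the offer itself.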
One potential pitfall that muddies the analytical waters is dealing with customers who "cross over" from the Control side to the Test side during the campaign. Take, for example, a customer who was assigned to the Control group but somehow ended up redeeming an offer that was sent to the Test group.
*Gasp!* What do we do now? Our experiment is ruined!
Well, not to fear (sort of). One widely accepted method of handling this problem is to exclude these Control Redeemers, as they are called, from the analysis entirely. The rationale is that these customers are marked as "dirty" for somehow getting their hands on an offer that was intended for someone else. Herein lies the issue!
These “Control Redeemers” tend to be more engaged customers.
- Therefore, I believe that excluding them creates an unfair bias.
- It seems that this bias is magnified with longer campaigns.
- In the breakdown below, "Best" refers to the best, most engaged customers, and "Sec." refers to secondary customers, those who are less likely to redeem anyway.
The longer a campaign, the higher the impact from Control Redeemers.
- I noticed this pattern within a recent campaign I analyzed and had a hard time explaining why results appeared so much stronger as the campaign carried on.
- To support my hypothesis of the Control Redeemer Effect, I conducted Sales Lift calculations using two methods:
- Calculating Test vs. Control lift including Control Redeemers in the control group.
- Calculating Test vs. Control lift excluding Control Redeemers in the control group.
- For a 4-week campaign, the lift is 1.2 times higher when excluding Control Redeemers.
- For a 52-week campaign, the lift is 3.7 times higher when excluding Control Redeemers.
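The two methods can be compared with a short sketch. The lift formula and all of the numbers below are illustrative assumptions (not the article's actual campaign data); the point is only the direction of the effect, since control redeemers tend to be high-spending customers:

```python
def sales_lift(test_sales, test_n, control_sales, control_n):
    """Lift = (avg test sales - avg control sales) / avg control sales.
    A common formulation, though by no means the only one."""
    test_avg = test_sales / test_n
    control_avg = control_sales / control_n
    return (test_avg - control_avg) / control_avg

# Hypothetical numbers: control redeemers spend far more per head
# than the rest of the control group.
control_clean_sales, control_clean_n = 90_000, 1_000   # non-redeemers
redeemer_sales, redeemer_n = 30_000, 100               # control redeemers
test_sales, test_n = 130_000, 1_100

incl = sales_lift(test_sales, test_n,
                  control_clean_sales + redeemer_sales,
                  control_clean_n + redeemer_n)
excl = sales_lift(test_sales, test_n,
                  control_clean_sales, control_clean_n)
# Dropping the high-value redeemers shrinks the control baseline,
# so the measured lift comes out larger under the "excluding" method.
```

The longer the campaign runs, the more redeemer sales accumulate in the control group, so the gap between the two calculations widens over time.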
I felt my hypothesis had been validated. As the length of the campaign increased, highly engaged customers had more opportunities to get their hands on a coupon (from a friend or family member, from generous Test group customers who leave their coupons in public places, or from gracious sales associates at the store), thereby inflating control group sales and diluting the measured lift.
There are several ways to mitigate this Control Redeemer issue.
- Assign unique offer codes to customers. This way coupon sharing is impossible, and Test vs. Control groups stay cleaner.
- Stratify Test vs. Control results by some sort of "Customer Value" segmentation. Some companies have a "Metals" or "Value" segmentation that ranks customers from Best to Worst. Stratifying results by this segmentation would keep comparisons within comparable customer tiers and alleviate some of the bias.
- Consider replacing the “control redeemer” with a composite of match-pair control customers (from a pool of control customers who have not redeemed), matching on keys from an arbitrary pre-period (say 1 month, 3 months, or a year depending on average purchase cycles). Note: this option is likely going to be analysis-heavy.
- If none of the above methods is feasible, then ultimately, the "real" answer for "campaign lift" probably lies somewhere between the "including control redeemers" and "excluding control redeemers" methods.
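The match-pair replacement idea above can be sketched as a greedy nearest-neighbor match on pre-period spend. Everything here is a simplified assumption: real implementations typically match on several pre-period keys at once, and the customer IDs and spend figures are hypothetical:

```python
def replace_redeemers(redeemers, clean_pool, pre_spend):
    """For each control redeemer, substitute the non-redeeming control
    customer whose pre-period spend is closest (greedy 1-nearest match,
    without replacement)."""
    pool = set(clean_pool)
    matches = {}
    for r in redeemers:
        best = min(pool, key=lambda c: abs(pre_spend[c] - pre_spend[r]))
        matches[r] = best
        pool.remove(best)  # each clean customer can be used only once
    return matches

# Hypothetical pre-period spend per customer id
pre_spend = {"r1": 500, "c1": 480, "c2": 200, "c3": 510}
matches = replace_redeemers(["r1"], ["c1", "c2", "c3"], pre_spend)
# r1 (spend 500) pairs with c3 (spend 510), the closest non-redeemer
```

In practice the matching window (1 month, 3 months, a year) should follow the average purchase cycle, as noted above, and this is where the analysis-heavy part comes in.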
Please let me know if you have other thoughts or ideas on this measurement methodology topic!
Sales Lift calculation (not standard, by any means, but a common one):
In the above, we include Control Redeemers in the total sales lift. To exclude Control Redeemers, we use the below (note: I had to remove the vowels from a few terms because of a character limit in the equation editor):