Honey

Optimizing Creative Weighting (and Backtesting our Models)

Honey is a technology company that created a browser extension which automatically finds, curates, and applies promo codes to save consumers money while they shop online. Much of their advertising appears to be display ads, but the videos they run are effective at generating high levels of attention and emotional engagement – no surprise there, really, as we now have objective proof that they’re funny 🙂

Version B

On one of their recent campaigns, Honey released two versions of a 60-second YouTube spot. The only difference between the versions was the middle third of the video, in which the scenes had been reworked with different graphical overlays and voiceover. Often, video marketers will realize a spot isn’t performing as well as expected and will edit it in the hopes of saving the creative. Alternatively, they might create multiple versions at once and flight them simultaneously to A/B test them.

Clearly, Honey concluded that version B was superior in this case and shifted their spend to it (just look at the view count!). This kind of decision is an example of what is called creative weighting.

Version A

This case study was unique in that we already knew which video performed best, as both versions have been in market for months. But one of the ways Attently provides value to marketers is by surfacing, far more quickly, the insights that would take significant time and spend to arrive at through conventional A/B testing, and by explaining the ‘why’, as in, ‘why did this version work?’. Thus, there were two goals for this case study: to see whether our models would pick the correct version (the faster that version is picked, the faster spend can be shifted away from lower-performing creative) and, if so, to verify that our models identified the altered section of video that drove the difference in performance (the ‘why’).

Why does the ‘why’ matter, when the creatives at Honey know exactly what section they changed? Knowing what to change in each iteration of an ad is often a product of guesswork and intuition. By knowing exactly which segments of a video are driving results (and which aren’t), marketers can take a much faster, more cost-effective approach to both editing and creative weighting.

Before we get into the data, here’s the winning version of Honey’s spot. (Version B)

The Approach

We created two opt-in panels with similar demographics from the Attently Audience userbase. Each panel was exposed to a different version of Honey’s spot, and panelists’ levels of attention and emotional engagement were measured using computer vision analysis of their webcam feeds. The resulting data were deidentified, cleaned, aggregated, and smoothed.
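To make the aggregation and smoothing step concrete, here is a minimal sketch in Python. It assumes a hypothetical per-panelist export with one attention score per second of the spot; the column names, the pandas-based approach, and the three-second smoothing window are illustrative assumptions, not Attently’s actual pipeline.

```python
import pandas as pd

def build_attention_curve(scores: pd.DataFrame, window_s: int = 3) -> pd.Series:
    """Aggregate deidentified per-panelist scores into one smoothed curve.

    scores: columns ['panelist_id', 'timestamp_s', 'attention'], where
            'attention' is a 0-1 score from the computer vision model.
    """
    # Clean: drop rows where no attention score could be computed
    # (e.g. frames in which no face was detected).
    clean = scores.dropna(subset=["attention"])

    # Aggregate: mean attention across the panel at each second of the spot.
    curve = clean.groupby("timestamp_s")["attention"].mean()

    # Smooth: centered rolling mean to suppress second-to-second noise.
    return curve.rolling(window=window_s, center=True, min_periods=1).mean()

# Toy usage for a (very small) two-panelist, two-second slice of a spot:
toy = pd.DataFrame({
    "panelist_id": [1, 2, 1, 2],
    "timestamp_s": [0, 0, 1, 1],
    "attention":   [0.8, 0.6, 0.9, 0.7],
})
print(build_attention_curve(toy))
```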

Our models predicted that version B would outperform by a wide margin, consistent with how the two versions actually performed in market. In addition, the attention curves for the two versions of the creative tracked one another closely, except for the middle third! If you recall, that was the only section that differed between the two versions.
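As one illustration of how that divergence could be located, the sketch below compares two smoothed attention curves third by third. It assumes both curves are pandas Series indexed by second, as in the earlier sketch; the per-third segmentation simply mirrors the structure of this particular spot, and everything here is an illustrative assumption rather than Attently’s production analysis.

```python
import numpy as np
import pandas as pd

def compare_by_third(curve_a: pd.Series, curve_b: pd.Series, duration_s: int = 60) -> pd.DataFrame:
    """Report the mean gap between two attention curves in each third of the spot."""
    rows = []
    bounds = np.linspace(0, duration_s, 4)  # 0, 20, 40, 60 for a 60-second spot
    for start, end in zip(bounds[:-1], bounds[1:]):
        seg_a = curve_a[(curve_a.index >= start) & (curve_a.index < end)]
        seg_b = curve_b[(curve_b.index >= start) & (curve_b.index < end)]
        rows.append({
            "segment": f"{int(start)}-{int(end)}s",
            "mean_gap_b_minus_a": seg_b.mean() - seg_a.mean(),
        })
    return pd.DataFrame(rows)

# A clearly positive gap in the 20-40s row (the reworked middle third), with
# near-zero gaps in the other rows, would match the pattern described above.
```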

The Results

This case study demonstrated several validation points for Attently’s technology:

  1. Correctly predicted which version of the creative would perform best
  2. Made that prediction within hours
  3. Identified the section the creators had changed as the only section where attention responses differed

Now, brands like Honey can test their videos either before launch or alongside a live campaign to learn which versions to allocate spend towards, faster and at lower cost than was previously possible.