Digital Marketing Triggers & Creating a Culture of Experimentation, Volume I

Over 100 years ago, either John Wanamaker or Lord Leverhulme (it’s still up for debate) said, “Half of my advertising spend is wasted; the trouble is, I don’t know which half.” 

Now that marketing has such a vast wealth of data, lamenting the difficulties of knowing what’s effective means nothing more than, “I don’t know how to turn data into insight.”

Digital marketing triggers are the perfect tool for building controlled experiments for anything — you can control for timing, geography, current weather patterns, customer behavior, or practically anything with a ready dataset.
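For instance, an abandonment trigger becomes a controlled experiment the moment you randomly hold back part of the triggered audience. A minimal sketch (the trigger name and the 10% holdout split are hypothetical):

```python
import random

def on_cart_abandoned(customer_id: str) -> str:
    """Runs when the abandonment trigger condition fires.

    Randomly holds back 10% of triggered customers as a control
    group, so the email's lift can be measured against a true baseline.
    """
    if random.random() < 0.10:
        return f"{customer_id}: control (trigger fired, no email sent)"
    return f"{customer_id}: treatment (abandonment email sent)"

for cid in ["c-101", "c-102", "c-103"]:
    print(on_cart_abandoned(cid))
```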

When creating content, we can start from a great place if we just set a plan in motion, listen to the data, and apply known best practices. Don’t expect your content or delivery to be perfect from the get-go. The hard reality is that making your content as effective as possible takes work: deliberate baby steps, testing, learning, and a steady pace toward optimal results.

When it comes to abandonment, we have an evergreen, rolling customer behavior to test, learn and optimize on for years to come. Abandonment is the first thing many marketers think about when they hear the phrase “digital marketing triggers.” Though triggered campaigns have much broader applicability, we’ll use abandonment examples to keep things simple up front.

Before launching into any experiment, consider the unofficial commandments of marketing experimentation: 

The 10 Commandments of Marketing Experimentation

Today, we’ll tackle the first four — mostly about internal considerations — and later this week, we’ll whip into the remaining six, which are largely focused on the test audience.

Have a Hypothesis

Before any experiment, teams should align on a What & Why Strategy.

The What is a statement of what you and your team believe; it’s what you’re interrogating with your planned experiment. These testable beliefs can (and probably should) come from the work your team has done to understand the customer, their journey, and the barriers they face. 

The Why is the purpose of the experiment, stated in terms of expected ROI (or another KPI). A solid Why makes prioritization easier: pick the experiment with the greatest leverage. This gets complicated when metrics don’t match across Whys (say you have 10 experiments you’d like to run, but each channel or initiative can’t be measured the same way). In those cases, default to the one metric that will prove the greatest overall business impact.
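As a rough sketch of that default (the experiment names and dollar figures below are invented for illustration), score every candidate against one shared business-level metric and run the highest-leverage test first:

```python
# Hypothetical backlog, each experiment scored against one shared
# metric: expected monthly revenue impact in dollars.
backlog = [
    {"name": "emoji subject lines", "expected_impact": 12_000},
    {"name": "SMS abandonment nudge", "expected_impact": 30_000},
    {"name": "weather-triggered promo", "expected_impact": 8_500},
]

# Default to the experiment with the greatest overall business impact.
backlog.sort(key=lambda e: e["expected_impact"], reverse=True)
for rank, exp in enumerate(backlog, start=1):
    print(rank, exp["name"], f"${exp['expected_impact']:,}")
```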

You need total clarity on what you expect to be true. If it’s too vague, you won’t be able to design an effective experiment that will return clean data. 

For example:

Too vague: “We want to test subject lines to see which gets the better open rate for this week’s send.”

Specific and testable: “We believe that our emails are seeing a low open rate because our subject lines are too generic, and so if we test emojis at or near the beginning of a subject line, we can demonstrate a lift of at least 10% on our current average open rate.”

Now you try, starting from your most pressing testable belief with the broadest business impact:

“We believe that [current state] and so if we [planned action] we will be able to see [measurable outcome].”
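To make the template concrete, here’s the emoji hypothesis above expressed as a check you could run when the test closes. This is a minimal sketch: the field names and helper function are hypothetical, and only the 10% threshold comes from the example.

```python
hypothesis = {
    "current_state": "open rates are low because subject lines are too generic",
    "planned_action": "lead subject lines with an emoji",
    "measurable_outcome": "at least a 10% relative lift in average open rate",
}
MIN_RELATIVE_LIFT = 0.10  # the threshold promised in the hypothesis

def outcome_met(baseline_open_rate: float, variant_open_rate: float) -> bool:
    """True if the variant beat the baseline by the promised relative lift."""
    lift = (variant_open_rate - baseline_open_rate) / baseline_open_rate
    return lift >= MIN_RELATIVE_LIFT

# e.g. an 18% baseline open rate vs. 20.5% with emoji subject lines
print(outcome_met(0.18, 0.205))  # ~13.9% relative lift -> True
```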

Set a Goal

Setting a specific objective allows us to challenge ourselves and set a minimum threshold for what we’ll accept as success. This target metric can be as high or low as makes sense for the team, but all goals should demonstrate impact on a larger brand-level objective. Otherwise, what’s the point?
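Whatever target you pick, it helps to pre-register it alongside a basic significance check so a lift that clears the bar on noise alone doesn’t get declared a win. A generic two-proportion z-test is one common way to do this (a sketch, not a prescribed method; the send and open counts below are made up):

```python
from math import sqrt

def two_proportion_z(opens_a: int, sends_a: int, opens_b: int, sends_b: int) -> float:
    """z-statistic for the difference between two observed open rates."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

# Hypothetical results: 1,800/10,000 control opens vs. 2,050/10,000 variant
z = two_proportion_z(1800, 10_000, 2050, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% confidence level
```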

Stick to a Process

As a team, commit to a process to identify, define, and deploy the experiments in question. Use this guide as a tool and checklist to get you started. Regardless of how you run it, make sure you can back both your experiment design and your conclusions with data, which means being deliberate about measuring outcomes. Experiments must also remain consistent from test to test. This is easiest with digital marketing triggers, which can be built in advance and kept on hold until they’re ready to deploy.
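One common way to keep assignment consistent while a trigger sits in holding (the hashing scheme below is one illustrative option, not the only one) is deterministic bucketing, so a customer lands in the same group whenever the experiment finally deploys:

```python
import hashlib

def assign_group(customer_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically bucket a customer for a given experiment.

    The same (customer_id, experiment) pair always maps to the same
    group, so a trigger built now and deployed later behaves identically.
    """
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

print(assign_group("c-101", "emoji-subject-lines"))
print(assign_group("c-101", "emoji-subject-lines"))  # same answer every time
```

Deterministic assignment also means re-running the analysis later reproduces exactly the same groups.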

Test One Thing at a Time

You can’t trust the results of a rushed experiment that doesn’t isolate variables. When you create the content for your investigation, be sure to maintain consistency, so there are no confounding factors. 

(In statistics, a confounder is a variable that influences both the dependent variable and the independent variable, causing a spurious association.)

If you’re doing multivariate testing, you’ll either need a large audience or patience to focus on optimizing one variable at a time. 

That said, even in multivariate experimentation (MVE), you should plan baseline test groups for each single variable, so you can see the impact of each variable on its own as well as in combination.
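For instance, with two variables — a subject-line style and a send time, both invented for illustration — the full-factorial grid plus a designated baseline cell gives you the single-variable comparisons for free:

```python
from itertools import product

subject_styles = ["plain", "emoji"]   # variable 1
send_times = ["9am", "7pm"]           # variable 2

# Full factorial: one test group per combination of the two variables.
for subject, send_time in product(subject_styles, send_times):
    print(f"group: subject={subject}, send_time={send_time}")

# Against the ("plain", "9am") baseline, ("emoji", "9am") isolates the
# subject line, ("plain", "7pm") isolates the send time, and
# ("emoji", "7pm") shows the two variables acting together.
```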

Some variants to test: subject line (with or without an emoji), send time, sender name, preheader text, and call-to-action copy.

If you’re interested in learning how to manage statistically significant audiences, when to stop and extrapolate insight, or how to react to results, stay tuned for the next installment.
