Watch how one team approached marketing data like a scientist in this case study presented by Melissa Mines of Bulldog Solutions. Melissa’s team used questions, hypotheses, and results in an iterative process to get the most out of an email marketing campaign.
This 16-minute marketing data case study video is part of the session “Drowning in Data, but Thirsty for Insight?” given by Elizabeth Crinejo and Melissa Mines at ConnectToConvert, August 21, 2017.
So, we’re going to go through a series of slides here from one particular customer, and we liked the concept of focusing on email a little bit. The great news is that there’s so much out there in terms of predictive analytics and predictive data.
There are so many wonderful ways we can reach people digitally and online, but you know what? If we can break things down and get a bit smarter about something as old-school as email, maybe there are some nuggets in here. That’s kind of the point: there are commonalities in the basics that we think, and actually know, can be translated to other channels, and we’d love to learn how you’re applying them.
So, it’s a basic game, right? We’re trying to get attention and drive engagement. Historically, email has just kind of been that annoyance in your inbox: “Hey, hi, pay attention to me.” But there is science now that we can apply where we really look at engagement. It’s now part of the marketing mix. It’s now part of the whole integrated effort.
So, as you’re doing something at the awareness level, where you’re hitting people from a social perspective, email is now part of the blend of how you can communicate with people in different ways. It’s that digital bird seed that we can kind of drop across channels.
So, we want to look at measured engagement. We can now apply the insights of the scoring and the qualification and start to look at behavior, so we can actually start to build a psychological understanding of how people behave as they interact with something as old-school as email.
So, again, an overview, and I’ll spare you reading every word on the bullets, but you can kind of see what we were trying to do. We were looking at purchase contacts. We were trying to see, you know, the emails directly driving to assets. We wanted to decrease abandonment, some of the basics that many of you are working with today.
Also, there was no CRM integration. As much as the beginning part of this talk described an ideal state and some ideal frameworks, we’re working in reality, and we don’t have all of our systems connecting. We don’t have everyone talking. We have data that’s actually dirty and not quite working the way we want. So what we really wanted to do was look at how we could improve ROI through the science we could apply to the different motions we wanted to run.
And so how did we do it? I’m going to give you the punchline first. The numbers were pretty good. We increased, you know, unique open rates. Our dials-to-deliverables were good, and we had an increase in scheduled meetings. That’s nice, right? Obviously, we picked this for a case study because it’s pretty successful, but I think one of the ways this became successful is that we didn’t try to solve everything at once. We broke everything up into small, bite-sized bits and looked at our numbers, and so I’ll kind of show you how this built up a little bit.
So, again, this is a timeline. You may not have this long a timeline, and that’s okay. Whether you have three months or one month, the point of this slide is to plan: look at your main goals and what you’re trying to accomplish, and then break up the different things you can test. Back to what we looked at earlier, this daunting amount of data and metrics can actually be categorized.
So, you can take a look at what you’re trying to accomplish overall, look at the different campaigns or programs you’re running, and begin to test different variables, and we’ll go through some of these. I’m not going to go through each of the different things we tested. I’ve just picked about four or five that we can take a look at. I’m happy to go into more detail at any time if you’re interested.
So, let’s first look at audience, right? We talked about starting with the audience and starting with the end in mind. The goal was to cultivate an engaged audience who’d consume more content. So, what we first wanted to look at was product segmentation, to see if we could get a little bit smarter about the audience based on the product we were targeting, then look at active versus inactive and basically do a purge of our inactives. Sounds pretty basic, right?
So, again, the first thing we did in terms of isolating was to divide up the customer set we were looking at into actives, inactives, and those who were new, to see if we could glean some insights about driving better behavior. If you look at just the top bar or the bottom bar, it’s kind of hard to tell; performance doesn’t look that great. But if you divide your audience into segments of actives and inactives, you can start to see which of your groups are performing higher and where you need to clean up your data and purge contacts from the system. So that’s a good first step, kind of basic.
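In code, that first segmentation step might look something like the following minimal pandas sketch. The column names and the 90-day/180-day cutoffs are assumptions for illustration, not numbers from the talk.

```python
import pandas as pd

# Toy contact list; real data would come from the email platform's export.
contacts = pd.DataFrame({
    "email": ["a@acme.com", "b@globex.com", "c@initech.com"],
    "created": pd.to_datetime(["2016-01-15", "2015-06-01", "2017-08-01"]),
    "last_engaged": pd.to_datetime(["2017-07-30", "2016-11-02", None]),
})
today = pd.Timestamp("2017-08-21")

def segment(row):
    # "New" = created in the last 90 days; "active" = engaged in the last
    # 180 days. Both cutoffs are illustrative assumptions.
    if (today - row["created"]).days <= 90:
        return "new"
    if pd.notna(row["last_engaged"]) and (today - row["last_engaged"]).days <= 180:
        return "active"
    return "inactive"

contacts["segment"] = contacts.apply(segment, axis=1)
print(contacts.groupby("segment").size())
```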
But then the next thing we did is we thought to ourselves, “Hey, these inactives, the people who aren’t interacting with us, I wonder if we can get them back.” Because the first conclusion we jumped to was, well, let’s just purge the inactives and get them out of our system so we have clean data. But what if those inactives just needed to be touched again? What if they could still yield some benefit for you?
So, what we did is we actually sent out an unsubscribe email, just one last touch: are you still interested? And you know what? We got a decent response rate of inactives coming back and subscribing. So, you know, it’s never too late to just check. If they didn’t reply, we purged them from the list, and that helped us with the clean data. But what we found was that those who came back were genuinely interested in being contacted again. It’s kind of like they came back into the family. The results were surprising from that perspective. So, that’s a little bit about audience. I’m kind of flying through this, but I want to get into a few other examples.
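Before moving on, here’s an equally small sketch of that win-back step under the same assumptions: one last opt-in email goes to the inactives, responders are kept, and everyone else is purged for list hygiene. The addresses are placeholders.

```python
# Hypothetical win-back outcome after the "are you still interested?" email.
inactive = {"b@globex.com", "c@initech.com", "d@hooli.com"}
resubscribed = {"b@globex.com"}  # clicked "keep me subscribed"

keep = inactive & resubscribed    # back in the family
purge = inactive - resubscribed   # removed so the list stays clean
print(f"won back: {len(keep)}, purged: {len(purge)}")
```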
So, subject line testing. Hang with me; this is pretty down in the weeds, but I think it’s pretty interesting when you take a look at it. We wanted to see if we could create the perfect subject line. Now, what I’m going to show you may not apply to your industry, but you can do a similar type of test in yours, because high-tech findings may not fit if you’re in a consumer business or going after some other form of B2B. We went through a series of subject line tests where we did some A/B. We started with the traditional approach and then introduced a little bit of personalization. Good news: we actually saw engagement increase. We saw our open rates increase.
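As an aside, if you want to check whether an open-rate lift like that is more than noise, a two-proportion z-test is one quick option. This is a sketch, not the team’s actual method, and the counts are invented.

```python
# Two-proportion z-test on unique opens for an A/B subject-line split.
from statsmodels.stats.proportion import proportions_ztest

opens = [900, 620]    # personalized variant, traditional variant (made up)
sends = [5000, 5000]
stat, pvalue = proportions_ztest(opens, sends)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")  # a small p suggests the lift isn't chance
```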
So, then we went through a process. We tested over 180 subject lines just to see what we could learn. The question was: do we have enough subject line testing to write the perfect subject line? Looking at history, we could test by type: what type of email was it? Did personalization matter? What if we mentioned a product? What if we blended these? We did a series of A/B tests just to see. And did character length matter? We’re in a 140-character tweet world. Does it matter? Does it not? Is shorter better? Is a little longer better? The results were very interesting.
Again, what we found was that personalization obviously makes a difference. I think the delta was the most interesting to us. You can assume it’s better to be treated like a human than not, so that’s pretty interesting. City doesn’t matter. A lot of things we hold kind of near and dear to our hearts don’t matter as much. But when you start to look at the number of characters, there’s a science now behind the content and the phrasing.
There are some amazing tools we can use to help analyze that, whether or not the tone and tonality are interesting. What we found is that shorter is not necessarily better. For this particular audience, between 70 and 79 characters, not 80, made a difference. So, when you add these in, you personalize, you acknowledge the company they’re with, you acknowledge the brand they’re with, and you keep a nice, tight subject. Again, I think the results speak for themselves. So, there are some examples of how that can be applied in the email, and we put this into the wild and let it run.
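A hedged sketch of how you might reproduce that length analysis: bucket tested subject lines by character count and personalization, then compare open rates. The rows and counts below are stand-ins for the roughly 180 real subject lines.

```python
import pandas as pd

# Stand-in rows; a real run would load all tested subject lines with their
# send and unique-open counts from the email platform.
tests = pd.DataFrame({
    "subject": [
        "Your security checklist",
        "Pat, a security checklist built for Acme's cloud migration plans this fall",
        "New product updates for the fall",
    ],
    "personalized": [False, True, False],
    "sends": [5000, 5000, 5000],
    "opens": [600, 950, 550],
})

tests["chars"] = tests["subject"].str.len()
# Bucket by character count; the 70-79 band is the one the talk calls out.
tests["length_bucket"] = pd.cut(
    tests["chars"], bins=[0, 49, 69, 79, 200],
    labels=["<50", "50-69", "70-79", "80+"],
)
tests["open_rate"] = tests["opens"] / tests["sends"]
print(tests.groupby(["personalized", "length_bucket"], observed=True)["open_rate"].mean())
```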
So, next, let’s take a look at some of the calls to action. All right: we’ve looked at the audience and gotten a little bit smarter there. We’ve looked at the introduction, in terms of how we’re going to appeal to you. Now let’s look at the call to action. If you can track the clicks and the calls to action to determine how many contacts are engaging, and you look at the types of conversions by week, patterns emerge; we noticed that certain domains were engaging with every piece of content.
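One way to sketch that roll-up, assuming an event-level click export with the field names shown (my guesses, not any specific platform’s schema):

```python
import pandas as pd

# Toy click events; field names are assumptions, not a platform's schema.
clicks = pd.DataFrame({
    "email": ["a@acme.com", "b@acme.com", "c@globex.com", "a@acme.com"],
    "clicked_at": pd.to_datetime(
        ["2017-06-05", "2017-06-07", "2017-06-12", "2017-06-13"]
    ),
    "asset": ["whitepaper", "webinar", "whitepaper", "case study"],
})

clicks["domain"] = clicks["email"].str.split("@").str[1]
clicks["week"] = clicks["clicked_at"].dt.isocalendar().week
# Domains that touch many distinct assets week after week are the
# accounts engaging with every piece of content.
print(clicks.groupby(["domain", "week"]).agg(
    clicks=("asset", "size"), distinct_assets=("asset", "nunique")))
```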
So, that helps you get a little bit smarter. The next thing we see so often is the urge to put everything in the body of the email: we have you, we’ve appealed to you, you’ve opened it, so now let me serve up three or four different assets that you can play with.
Well, what we found was that that isn’t necessarily the sweet spot, and I know this is a constant tension. Having spent a lot of years in the B2B space working with product teams, who want to put their latest and greatest product in while we try to add even more, I understand and appreciate the challenge here. But really, simple is better. If you have this flow of being smart about going after the right person, and the introduction actually matches the asset (you can see that over here to your right), you really only need one.
You don’t need a whole laundry list of your latest and greatest in there. The efficacy just wasn’t showing in the numbers. Again, we may have satisfied three or four different executives over on the left, but the version on the right is really how people want to engage. So, in terms of what Elizabeth mentioned earlier: yes, what would a marketing presentation be without “know your customer”? But the reality is you can now apply science to really understanding how they want to behave, how they’re interacting, and what they want to see.
So, I’ll kind of skip through for the sake of time, but, again, personalization. What we found was that it didn’t help to continue personalizing all the way down through the email. It was really about hitting them upfront; you don’t necessarily have to personalize all the way through. It starts to look a little bit like a form letter, you know? It’s kind of like you’ve put them through a personalization generator a little too much. It loses a bit of the authenticity, if you will.
So, finally, if you think about what we’ve looked at, one of the main points is that in this world of data that can seem very, very big, you should really break things up and ask questions. To the point we looked at a couple of minutes ago: line up your timeline and really work with your team to identify what questions our customers are asking and what questions we can ask, and put hypotheses around them. Instead of swinging for the home run, for those of you into baseball, it’s a bit of small ball. Let’s go for the base hits, because that gives us better insight into what’s really happening.
So, from a time perspective, you can take a look at when emails are sent. You know, often we just assume that if everything goes out on a Tuesday at 9 a.m., it’s effective, right? That’s what we do. Don’t ever send anything out on a Monday. Heaven forbid you send out anything on a Friday afternoon; that’s the dead zone, don’t do it.
You know, we didn’t find much benefit to sending something out at 9 a.m. Eastern Time, when everybody’s getting into the office. There wasn’t much delta between the different time zones either. And when it comes to days of the week, at least for this audience (again, this is a tech audience we’re going after here), look at that: Saturday. You may find something different with your base, but let’s look at the data, see what it tells us, and then think about ourselves as humans.
When we’re in the middle of meetings and something hits us at 4 p.m., we might click on it, but are we engaging with it? The answer is probably no. When do we do most of our catch-up reading? When are we really looking at the backlog of things and saying, “Hey, you know, I actually wanted to take a look at that piece that ended up in my email box”? Nowadays it’s Friday afternoons, Saturdays, and Sundays. So while we don’t want to over-rotate and become annoying by hitting people on the weekends, we also need to realize that people have asynchronous behavior. We save things, we send things, and so clicks and engagement are going to look a little different. And, again, data don’t lie.
Let’s look at what we have and see what your audiences are telling you in terms of the times of day, and think about that. You can really outsmart the competition by looking at behaviors. And, again, I kind of hinted at this point a little earlier: what we found is that people would open at 4 p.m., but they would actually click and engage in the mornings, before their days were starting.
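To see that open-versus-click gap in your own data, one minimal approach is to split open and click events by day and hour, as in this sketch. The timestamps are placeholders.

```python
import pandas as pd

# Placeholder open/click events; a real run would use the platform's
# event-level export, one timestamped row per open or click.
events = pd.DataFrame({
    "event": ["open", "open", "click", "click"],
    "ts": pd.to_datetime([
        "2017-06-06 16:05",  # Tuesday 4 p.m. open
        "2017-06-10 10:30",  # Saturday morning open
        "2017-06-07 08:15",  # Wednesday morning click
        "2017-06-10 11:00",  # Saturday click
    ]),
})

events["day"] = events["ts"].dt.day_name()
events["hour"] = events["ts"].dt.hour
# Look for the asynchronous pattern: opens late in the day,
# clicks the next morning or over the weekend.
print(events.groupby(["event", "day", "hour"]).size().rename("count"))
```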
So, a few takeaways from the case studies we’ve just gone through. Don’t make early assumptions. Be as objective as you can with your team about what your goals are and how you want to break them out into small bits.
Look at your desired outcomes and develop hypotheses. It’s a science game. Plan the work and work the plan. We’re all under the gun for speed, I get it. We may not have seven months to do something, but you will never regret taking a little more time up front to plan.
It’s an old adage, but it’s a battle we fight constantly. As we work with clients, we ask them just to spend a little more time to think, and having been on the client side for most of my career, I’ve asked our teams the same: just think and plan, and improvements are guaranteed. Sometimes the numbers come back and, you know what? They just didn’t work, and that’s okay. That’s all learning. But the goal is to keep communicating.
I cannot overemphasize the importance of making sure the left hand is communicating with the right hand, and that you as managers and leaders are talking to your executives and keeping them informed. Bring everyone along on the importance of the numbers you’re seeing, and don’t be afraid to show when things are red and not green.