Ask any data scientist or savvy marketer, and they’ll say you need a large sample size to get your hands on any sort of reliable data.

But what are you supposed to do if you’re running low-volume cold email campaigns? Just ignore the data and keep on guessing?

Not at all. There are still smart ways you can — and should — test your campaigns. 

Why testing small sample sizes is traditionally frowned upon

Traditionally, data scientists and marketers frown upon testing small sample sizes. Think about it: when you split a cold email campaign into two variations and send them to, say, 100 prospects, you’re not collecting enough data to crown a clear “winner.”

Why? With a sample that small, the difference between your variations rarely reaches statistical significance. That means you could be putting all your eggs in the wrong basket, even when one variation looks like a clear winner.
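To see why, here’s a minimal sketch of a two-proportion z-test, a standard way to check whether the gap between two reply rates is statistically meaningful. The 8-versus-13 reply counts below are hypothetical, purely for illustration:

```python
from math import sqrt, erfc

def two_proportion_z_test(replies_a, sends_a, replies_b, sends_b):
    """Two-sided z-test for the difference between two reply rates."""
    rate_a = replies_a / sends_a
    rate_b = replies_b / sends_b
    # Pooled reply rate under the null hypothesis (no real difference)
    pooled = (replies_a + replies_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_b - rate_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical split test: variation A gets 8/100 replies, B gets 13/100
z, p = two_proportion_z_test(8, 100, 13, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 1.15, p = 0.249
```

Even though variation B’s reply rate is more than 60% higher, the p-value of roughly 0.25 is nowhere near the conventional 0.05 cutoff, so chance alone could explain the gap.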

That’s what makes testing cold email campaigns difficult. Typically, you’re not sending out campaigns to more than a couple hundred people. And even if you get a batch of 500 prospects, you’re likely going to want to break up your campaigns into various segments so you can send a more relevant and personalized message.

So, a traditional marketer might conclude these campaigns just aren’t worth testing — it’s a waste of time and money. But testing cold email campaigns is a totally different game, and it’s not something you should give up on. Testing even these small sample sizes can be super valuable to your strategy… if you know what you’re doing.

Testing cold email campaigns — how to go about it the right way

So, to answer your question, yes, you should test cold email campaigns. Even if you’re working with a smaller list of prospects, you can collect valuable information to help inform your cold email marketing strategy.

Here are some ideas to help you start thinking about and testing your campaigns the right way.

1. Tap into the qualitative data (aka prospect feedback)

Let’s say you send out a cold email marketing campaign to 100 prospects. No, you can’t really rely on the quantitative metrics from that small sample size, but you can tap into the qualitative data (the responses) you get in return.

Even if you only get 10 or 20 responses out of the bunch, this is still incredibly valuable, especially if the responses are negative. That might sound strange, but negative responses really give you the opportunity to improve your campaign.

Here’s an example: You send out 100 emails, and you get 20 responses. Five of those responses say, “No thank you. We don’t deal with international teams.” This gives you a direct line to your prospects and helps you understand how you can better tailor and target your future campaigns.

That’s the other thing: this data is super direct. With quantitative data, you still have to guess at why people aren’t clicking on this or that; qualitative data often hands you the answer outright.

A prospect, for instance, may simply say, “We hate purple, and that’s why we won’t use this on our website.” You would never receive that type of transparent feedback through quantitative data.

We recently experienced this in one of our own campaigns. We sent out 100 emails and got nine responses. That might not seem like a lot, but five of those nine responses raised the same objection. We addressed that objection in the next version of the email and immediately more than doubled the campaign’s response rate.
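If you want to make that pattern-spotting a little more systematic, a simple tally of hand-tagged reply themes is enough. Here’s a minimal sketch; the theme labels are invented for illustration:

```python
from collections import Counter

# Hypothetical theme tags, added by hand as each reply comes in
reply_themes = [
    "no_international_teams",
    "happy_with_current_vendor",
    "no_international_teams",
    "bad_timing",
    "no_international_teams",
]

# Surface the most common objection so the next variation can address it
for theme, count in Counter(reply_themes).most_common():
    print(f"{theme}: {count}")
# no_international_teams: 3
# happy_with_current_vendor: 1
# bad_timing: 1
```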

All of that is to say, you don’t need to aim for a specific number of responses — you can still pick up valuable data from just a handful.

So although quantitative data (clicks, conversions, and all that good stuff) seems like the most insightful feedback, this type of qualitative feedback, coming from actual human beings who’ve read your words, is extremely valuable to you and your cold outreach efforts.

2. Have at least two A/B tests running at once (but no more than five)

Testing cold email campaigns isn’t just a one-and-done thing. It’s something you’ll want to continue to do. There are a few reasons for this.

First, even if you build a campaign around your winning variation, you have to realize it’s eventually going to lose steam, no matter how good it is. The content gets stale with overuse, and recipients may even start marking it as spam.

Second, there is always room to improve, and by testing continuously, you can maximize your returns.

So, what will this testing process look like? For starters, with each test, you’ll generally want to focus on larger optimizations. Rather than tweaking sentence structure here or swapping words there (like “hi” versus “hello”), vary your email’s structure, switch up the images you use, or change the length of the intro.

Logistically, we generally recommend letting your A/B test run until you have enough responses — either positive or negative — to inform your next variation. Then, you’ll remove the weakest performer and replace it with the newest variation.

There’s no set amount of time you should run these tests, but even gathering something like eight responses might be enough to inform your next moves.
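Here’s a minimal sketch of what that rotation might look like in practice. The variant names, reply counts, and the eight-reply threshold are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    sends: int = 0
    replies: int = 0

    @property
    def reply_rate(self) -> float:
        return self.replies / self.sends if self.sends else 0.0

def rotate_weakest(variants: list[Variant], new_variant: Variant,
                   min_total_replies: int = 8) -> list[Variant]:
    """Replace the weakest variant once enough replies have accumulated.

    The eight-reply threshold is illustrative; tune it to whatever
    gives you enough signal to act on.
    """
    if sum(v.replies for v in variants) < min_total_replies:
        return variants  # Not enough signal yet; keep the test running
    weakest = min(variants, key=lambda v: v.reply_rate)
    return [new_variant if v is weakest else v for v in variants]

# Hypothetical two-variant test: 3/50 and 5/50 replies (8 total)
test = [
    Variant("money-saving angle", sends=50, replies=3),
    Variant("time-saving angle", sends=50, replies=5),
]
test = rotate_weakest(test, Variant("time-saving angle v2"))
print([v.name for v in test])
# ['time-saving angle v2', 'time-saving angle']
```

The design choice worth noting: the test runs until it accumulates a fixed amount of signal (replies), not a fixed amount of time, which matches how low-volume campaigns actually gather data.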

If you’re just getting started with testing, a solid strategy is to test two of your value propositions against one another. For example, your first variation might highlight a money-saving angle and a second variation might highlight a time-saving angle.

If you’re not sure what value propositions you want to highlight, try this: test a sequence aimed at the very top of the sales funnel. Don’t mention any value propositions; don’t even mention your product or service in the email.

Instead, reach out to see what your prospect is working on and explain why you’re interested in meeting with them (not to pitch or sell!). You might be surprised how many people you can get on the phone to talk about their general pain points, which can then help inform your marketing strategy moving forward.

From there, you can continue to test different benefits or value propositions, or you can start applying the feedback you receive.

You can then just keep repeating this process. Even if these tests are low volume, you should receive some valuable feedback.

3. Leverage your inbound leads, too

So far, we’ve focused on our outbound email marketing efforts, but it’s important you don’t forget about those inbound leads. They’re incredibly valuable and can help inform your cold email outreach strategy.

As you know, an inbound lead comes to you more organically: a prospect has clicked through to one of your articles from Google, engaged with your content on social media, or opted into your weekly emails.

You can examine the analytics around your inbound leads and determine what performed well and what resonated with your buyers. Then, work backward. Because you already have data telling you what worked well and what didn’t, you can build an email campaign around that strategy to test.

Here’s one example of what this might look like: You get folks subscribing to your email list. Once they subscribe, you send them a call-to-action follow-up. If you’re gathering data and insights, this might be a question, like “Tell me a little more about your email strategy,” or “Do you use XYZ platform?”

Using this inbound data can help you more quickly inform your outbound cold email strategy. No, not all of us have the luxury of tapping into this data, but it’s worth leveraging if you can.

By following these data-gathering and testing strategies, you can not only increase your response rate and generate more positive leads, but also maintain your deliverability rates thanks to all the variation.

If you want a powerful tool to A/B test your B2B campaign(s) or onboarding emails, start a free Quickmail trial to implement the strategies we discussed and enjoy cracking the code with a low volume of emails.