Worshipping the open rate fairy and calling it science
Published: Thu, 09/24/20
“I care less and less about open rates the more I realize and learn - from computer scientist types over the years - how inaccurate they are, unless something has changed I am unaware of.”
To which computer scientist Fabien Delorme chimed in:
“As a computer scientist I can confirm this won't ever change...It's the same with websites and google analytics btw. There's no way around it. That's the way the internet is built.”
More:
My pal Jim Yaghi (yet another computer scientist) says Android phones have HTML turned off by default, meaning they aren't tracking opens anyway. And he used to joke about how online marketers claiming to “scientifically” test emails have clearly never been introduced to the scientific method, with no clue about the rigorous discipline it takes to pull a real test off.
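For context on why HTML being off matters: conventional open tracking works by embedding a tiny remote image in the email, and the "open" is simply that image's URL being requested from the sender's server. If images never load, the open is invisible. A minimal sketch of the mechanism (the host and parameter names are hypothetical, not any real ESP's API):

```python
# Sketch of conventional open tracking: the email embeds a 1x1 remote
# image whose URL encodes the recipient; the "open" event is just the
# HTTP request for that image reaching the sender's server.
# Endpoint and parameter names are hypothetical, not any real ESP's.
from urllib.parse import urlencode, urlparse, parse_qs

TRACKER = "https://example-esp.com/open.gif"  # hypothetical tracking host

def tracking_pixel(recipient_id: str, campaign_id: str) -> str:
    """Return the <img> tag an ESP would inject into an HTML email."""
    qs = urlencode({"r": recipient_id, "c": campaign_id})
    return f'<img src="{TRACKER}?{qs}" width="1" height="1" alt="">'

def record_open(request_url: str) -> dict:
    """All the server learns when the pixel is fetched -- nothing more.
    If images are blocked or HTML is off, this request never happens
    and the open is never counted."""
    q = parse_qs(urlparse(request_url).query)
    return {"recipient": q["r"][0], "campaign": q["c"][0]}

pixel = tracking_pixel("sub-123", "sept-24")
# The "open" is nothing but this URL being requested:
url = pixel.split('src="')[1].split('"')[0]
print(record_open(url))  # {'recipient': 'sub-123', 'campaign': 'sept-24'}
```

Note that nothing in this scheme distinguishes a human reading the email from any other software fetching the image, which is part of why the numbers are so shaky.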
Engineer Sanjay Pande once broke it down even more:
===
As you know, I'm the geek who has designed and built many of these so-called tests, and I can tell you your scientist friend is 100% correct.
There are way too many variables in e-mails.
1. E-mail volume is relatively small, and the smaller the sample, the larger the margin for error.
2. Split testing subject lines is useless because the variants may have different delivery rates, so their open rates aren't comparable.
3. It's hard (if not impossible) to tie an e-mail to sales unless the offer is in the e-mail. Even then you don't know what "really" caused the sale. Sometimes it's the sequence. Sometimes it's people's mood. There are a number of causal effects before the sale. The prior e-mail could have been a bigger influence on the sale.
4. A price change (even an increase) can put your sales up or down. The e-mail in this case was not the cause of the sale, and all the split tests in the world wouldn't work (Ted Nicholas's book flopped at 20 bucks and was a best-seller at 70 bucks to the same lists).
5. Even the average marketer knows that 80% of sales are made after the 5th contact. So, what's the real point of your split test on an e-mail with an offer? This is a problem that even plagues direct mail marketers.
6. Most tests are done in smaller numbers with the premise that rolling out will replicate the results. This is flawed from a scientific perspective again: the sample size changes dramatically, which will affect the results, and you'll never know why you had such a big hit or a flop (even though it's helpful to have indicators to go on).
All these folks who spout their expertise on testing should really talk to a few scientific people (and perhaps geeks) on how tests are done and how they still mean squat.
People who think they're marketers are the worst offenders, followed by the folks who call themselves "real" business people, very few of whom even understand how these things work.
Tests will only give you "indicators," and as you said (with your own evidence to back it), you really do not know whether the same e-mail will work or bomb when re-used.
Thanks for covering this topic. People really should wake up and get it.
===
Another kicker:
Was when my pal Jon McCulloch showed me how Gmail now grabs images once and serves them from its own proxy, isolated from the server hosting them, throwing open-rate measurement completely out of whack.
I don't know if this is still the case or not with Gmail.
But, it would not be surprising at all if it was.
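To see why a caching image proxy wrecks open counting: the sender's server sees the proxy's fetch, not the reader's. Here's a toy model of that behavior (an assumption-laden simulation, not Gmail's actual implementation, which the text above notes may have changed):

```python
# Toy model of why a caching image proxy skews open counts: the origin
# server only ever sees the proxy's single fetch, no matter how many
# times the reader actually opens the email. This simulates the idea,
# not any real proxy's implementation.

class PixelServer:
    """The sender's tracking server; counts pixel fetches as 'opens'."""
    def __init__(self):
        self.opens = 0

    def fetch(self) -> bytes:
        self.opens += 1
        return b"GIF89a"  # stand-in for the 1x1 pixel image

class CachingProxy:
    """Stands in for a Gmail-style image proxy: fetch the image once,
    then serve every later open from cache, never touching the origin."""
    def __init__(self, origin: PixelServer):
        self.origin = origin
        self.cache = None

    def open_email(self) -> bytes:
        if self.cache is None:
            self.cache = self.origin.fetch()
        return self.cache

server = PixelServer()
proxy = CachingProxy(server)
for _ in range(5):       # the reader opens the email five times...
    proxy.open_email()
print(server.opens)      # ...but the sender counts only 1 open
```

The distortion cuts both ways: repeat opens vanish into the cache, while a proxy prefetch can register an "open" no human ever made.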
Finally:
One of the only people I know who has done an actual scientific test related to email (to try to figure out which ESP has the best delivery), Email Players subscriber and former Navy nuclear engineer Troy Broussard, once described how they did it.
And needless to say, it was quite a process.
A process I doubt 1 in 10,000 self-described online marketers or "email specialists" would have the patience to pull off, much less have the resources, enormous list size, and other prerequisites in place.
Bottom line:
If you want to worship the open rate fairy and call it science that is your business.
I won't say tracking opens is 100% useless.
But, I will say it is 99% overrated.
Akin to determining a baseball game's winner by measuring errors, walks, and strikeouts instead of runs scored.
In my experience, what matters is ROI, not some vanity metric people brag about at masterminds and in Facebook groups that has about as much relevance to your business's sales as your highest Frogger score at the arcade in 1983.
Another bottom line:
In my opinion, it's best to focus on things you can control... like tracking sales trends over time, building your world and expanding it with offers, curating your list with higher quality names, keeping your fingers on the beating pulse of your list via consistent daily emails, and continually making yourself better today than you were yesterday at writing emails and subject lines, thinking up attractive offers, etc... vs focusing on things you cannot control (vanity metrics like opens, opt-in rates, etc).
All of which requires no HTML.
No tracking software or analytics.
And, no having to go blind staring at percentages.
Whatever the case, “Email Players” newsletter subscription info is here:
https://www.EmailPlayers.com
If the world-building aspect above appeals to you, this issue could be especially useful for you.
Ben Settle
P.S. My all-time favorite open rate fairy story:
One of my "Email Players" subscribers gave me a testimonial about how using my not-caring-about-opens ways nabbed his client more sales in a single month than ever before... but the client was still worried about why his open rate was only 9%.
The irony wrote itself...