The NFL is getting ready for the annual combine. This is where players get tested both physically and mentally to see if they’re NFL material. There is psychological testing to gauge intelligence. They run the 40-yard dash. It’s a 4-day job interview, much of which plays out on TV.
Teams use the data to make decisions about which players to select in the annual draft. They can stack the reams of information from the combine with the data generated over the course of a player’s college career and choose someone who will, hopefully, fit into a team’s depth chart as well as its philosophy.
Anyone who follows the NFL will tell you that all of this data has its place, but it’s far from infallible. Kurt Warner, a 2-time NFL MVP, went undrafted. So did Warren Moon, a Hall of Fame quarterback. Put Tony Romo on that list as well. No team looked at the data and thought any of these men were worthy of a draft pick. Oops.
You just might be guilty of the same thing in your business. The data isn’t infallible and the data only measures what it’s designed to measure. Tom Brady (selected 199th in his draft year) recently told NFL prospects that they can’t measure heart. He’s right, and it’s because there isn’t a solid way to capture that data.
How are you making this mistake? You might be using one data point to draw a conclusion that isn’t right. Correlation isn’t causation, as we hear so often. Not all Grateful Dead fans smoke pot and have long hair; identifying those fans as a target doesn’t mean you should promote to the stereotype.
Another faulty conclusion might be due to an error in the data itself. I had an advertiser on a site I ran complain that they weren’t getting great results. They had neglected to respond to a question from their salesperson about turning on frequency capping, which would have limited the number of times a day someone saw their ad and extended their reach. They were reading the data correctly, but the data itself was faulty due to an underlying issue with how the campaign was set up.
One of my favorite data errors sits at the foundation of the entire TV business: the Nielsen ratings. The TV and ad industries have attached an accuracy level to Nielsen ratings that even Nielsen says is unreasonable. A study from a few years back, analyzing 11 years of data, found that the margin of error for reported results was often more than 10%. That might not sound like much, but it can represent hundreds of thousands or even millions of impressions. The issue here is that buyers are too focused on the (imprecise) numbers rather than on concrete metrics such as sales.
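To see how a "small" margin of error scales, here’s a quick back-of-the-envelope sketch. The viewer counts are hypothetical, chosen only to illustrate the arithmetic; they are not actual Nielsen figures:

```python
# Illustrative only: how a 10% margin of error scales with audience size.
# The viewer counts below are hypothetical, not actual Nielsen data.

def impression_range(reported_viewers, margin=0.10):
    """Return the (low, high) audience bounds implied by a margin of error."""
    low = reported_viewers * (1 - margin)
    high = reported_viewers * (1 + margin)
    return low, high

for viewers in (500_000, 5_000_000):
    low, high = impression_range(viewers)
    print(f"Reported {viewers:,}: true audience could be {low:,.0f} to {high:,.0f}")
```

For a show reporting 5 million viewers, that 10% swing is a million-viewer spread, which is exactly the kind of gap that makes precise-looking ratings less trustworthy than they appear.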
Measure what you can measure. Don’t extend that measurement to other things that aren’t measured as well. I bet your results will improve. Let me know how it goes.