Our data-saturated culture loves research and numbers, but information can mislead. Many factors that people may not realize play into accuracy. It’s what separates quality, human-driven research from computerized shortcuts, and it’s what makes us all informed citizens and consumers. In this second piece of a three-part series, Candice Bennett takes a look at different aspects of data-gathering, what they mean, and why they matter. If you missed the first installment, see "The Truth Behind the Numbers."

Sometimes numbers don’t tell the whole story when it comes to judging a study’s veracity and effectiveness, and machines, such as those that power online or automated surveys, can’t always suss out the useful truth from the questionable data.

Here are four things to consider when weighing the usefulness of any survey or research.

Sponsors and Methodology

A study’s methodology should provide a clear picture of every angle of the research: the sample size, how that sample was selected, and when the data was collected. If a link to the full results isn’t provided, that raises the questions: What aren’t they sharing? What is their agenda?

This goes for any research, especially when the group backing the study would benefit from certain findings. The NRA, for example, touted statistics about gun ownership and self-defense that were later debunked. Similarly, oil companies have been accused of downplaying environmental impacts. Did they skew the questions? Cherry-pick the respondents? The source of the information matters.

Misleading Charts


Charts are notoriously easy to skew, and the fine print is easy to gloss over. In our infographic age, when charts grab clicks on social media and spice up a page, people send random data points to a graphic designer whose job is to make them pretty. But that designer may not know what the data points represent, and the resulting picture could lead you astray.

Pie charts are especially notorious for this. They should be used only when the responses add up to 100 percent of a single group: 20 out of 100 people like chocolate, for example, while 80 out of 100 dislike it. But when 54 percent of all men say cell phones are awesome and 25 percent of women say cell phones are awesome, those are two distinct groups, not parts of a whole.
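That rule can be sketched in a few lines of code. The figures come from the examples above; the function name is ours, and this is an illustrative check, not a charting library’s API:

```python
def pie_chart_safe(shares):
    """Return True only if the shares partition one whole (sum to ~100%)."""
    return abs(sum(shares.values()) - 100.0) < 0.01

# One population, mutually exclusive answers -- a valid pie:
chocolate = {"like chocolate": 20, "dislike chocolate": 80}
print(pie_chart_safe(chocolate))   # True

# Two different populations (men vs. women) -- not parts of a whole:
phones = {"men who say cell phones are awesome": 54,
          "women who say cell phones are awesome": 25}
print(pie_chart_safe(phones))      # False: 54 + 25 = 79, not 100
```

A chart tool will happily draw either dictionary as a pie; only the first one means anything.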

Check out the bad example below, from CNN.com. It’s a pie (of sorts), but the statistics come from entirely different populations -- they aren’t all part of the same 100 percent, so it doesn’t really add up.

CNN bad data example

Cherry Picking

An advertiser wants to know if the Kardashians -- their company and their shows -- are worth investing in. A quick scan of the women on your Twitter feed shows that 86 percent of them are discussing Kim and Co. So that’s an easy report about a majority of women being interested in the First Family of Pop Culture, right?

Not at all.

Nothing about that sample is representative or random. It’s a survey of people you know on Twitter -- not the larger public, or the television audience. Say the percentage of women interested in the Kardashians in a randomized survey across a variety of media falls to 20 percent. That’s a huge difference -- enough that if you bought ads on their show based on the 86 percent figure, you could soon find yourself explaining why interest came in 66 points below your projection.
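A quick simulation makes the gap concrete. The population size, the 20 percent true-interest rate, and the follow-bias rate below are all illustrative assumptions, not real data:

```python
import random

random.seed(1)  # reproducible run; exact percentages will vary by seed

# Hypothetical population of 100,000 women; assume only 20% are
# genuinely interested (the randomized-survey figure above).
population = [True] * 20_000 + [False] * 80_000
random.shuffle(population)

# A Twitter feed is not a random draw. For illustration, suppose your
# follows skew heavily toward pop-culture fans: interested women always
# show up in your feed, uninterested ones only rarely (made-up 4% rate).
feed = [p for p in population if p or random.random() < 0.04]

feed_rate = sum(feed) / len(feed)
true_rate = sum(population) / len(population)
print(f"feed sample: {feed_rate:.0%}, random sample: {true_rate:.0%}")
```

The biased feed reports interest in the mid-80s even though the true rate is 20 percent -- the same kind of gap as the 86-versus-20 example above.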

And then there’s the researcher who wants to ensure a certain outcome, or who ditches negative answers for fear they’d concern a client -- and ends up supplying an inaccurate picture of what’s going on.

It’s important to look beyond the title of a study to understand the full details provided — do any of the results conflict with each other? Do you understand exactly who the respondents were, and were all of them accounted for in the study?

Quality Control

People can often tell when someone is lying to them. A machine can’t. In telephone surveys, the human interviewer is the first gatekeeper, able to spot a “lying” respondent. In an online survey, algorithms check for “cheating,” and data processors then look for conflicting responses that would disqualify a respondent. In an automated IVR poll or a brief online poll, where there is no human interaction, it’s impossible to tell who is on the other end of the survey -- or whether their responses seem off.
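The kind of consistency check a data processor might run can be sketched like this. The survey fields and the two rules are hypothetical examples, not any firm’s actual screening logic:

```python
def flag_conflicts(response):
    """Return a list of reasons to junk an online-survey response."""
    reasons = []
    # Contradiction: claims no car but reports driving daily.
    if response.get("owns_car") == "no" and response.get("drives_daily") == "yes":
        reasons.append("owns no car but drives daily")
    # Straight-lining: the same answer to every rating question.
    ratings = [v for k, v in response.items() if k.startswith("rate_")]
    if len(ratings) >= 3 and len(set(ratings)) == 1:
        reasons.append("identical answer to every rating question")
    return reasons

suspect = {"owns_car": "no", "drives_daily": "yes",
           "rate_price": 5, "rate_service": 5, "rate_speed": 5}
print(flag_conflicts(suspect))
# ['owns no car but drives daily', 'identical answer to every rating question']
```

A human interviewer catches these in the moment; automated surveys have to catch them after the fact, and the briefest polls never catch them at all.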

For some projects, a social media survey or a few short, basic questions might suffice. But for many firms, there’s a pressing need for real data -- usable data -- that truly illuminates new ways to serve their base. That kind of research takes more thought and more time, but the payoff can be incalculable.

Don't miss the other two articles in Candice Bennett's series: Part 1, "The Truth Behind the Numbers," and Part 3, "Correlation Does Not Equal Causation and Why You Should Care."

Candice Bennett also authored "Should Employee Engagement Surveys Be Used in Performance Evaluations?" for us on August 26, 2015.