The never-ending debate on employee surveys
Social psychologist Rensis Likert was a visionary. His contributions to organizational psychology influenced the businesses of his day, and his theories of participative management can be said to have shaped today's concepts of employee engagement. But it's the five-point response scale he developed that is his greatest legacy. Likert's namesake scale is the most common approach used in employee surveys. It's also one of the most contentious.
Is it best to force answers to an employee survey onto a four-point scale (strongly agree, agree, disagree, strongly disagree), or to include a fifth 'don't know/undecided' option? And what do neutral responses actually mean? Such are the conundrums researchers and statisticians argue over. As is the way of research, one position invariably triggers another. Let's consider a few.
Does the placement of the ‘undecided’ category affect responses? Should it be at the midpoint of the scale or separated from the scale? What about the wording?
A U.S. software company reports respondents are more likely to choose the 'undecided' category when it sits off to the side of the scale. There are also different response patterns, they say, depending on whether the midpoint is labeled 'undecided' or 'neutral'. Others concur, suggesting different neutral words can mean different things to different people. In a string of correspondence discussing neutral options on perception surveys, a GovLoop commenter notes: "If the only non-opinion option is 'neither agree nor disagree', people who check that off will include those who think the question is not relevant to them, people who don't feel they have enough information to make an informed choice, and people who can think of reasons to be positive and reasons to be negative, but can't make up their mind."
Should a neutral option even be included?
In a paper written for the U.S. Naval War College, author Douglas Ducharme observes that some respondents resent not having a neutral option. For this reason, a researcher should avoid the risk of leaving respondents frustrated and disengaged from the survey, he writes, adding that engaged respondents yield the most reliable data. Ducharme also shares researchers' cautions that when respondents are asked questions about their own lives, feelings, or experiences, offering a 'don't know' option lets them avoid the work required to give an answer. On the flip side, he points out, others argue that respondents are sometimes asked about things they legitimately lack the knowledge to judge.
How does a neutral option affect data?
Writer Jeff Sauro points to concerns that a neutral option plays into the hands of respondents who lean only slightly toward a favorable or unfavorable response: a neutral choice masks those sentiments. He cites research by Presser and Schuman that found 10 to 20 percent more respondents chose the neutral option when it was offered than volunteered it on the same survey when it wasn't. The conclusion: a neutral category provides an easy out for respondents who are less inclined to express their opinion, but it potentially means a substantial proportion of those who favor or oppose a topic aren't counted.
Not good for data dependability.
Then there's the question of central tendency bias
Think of the manager who gives every employee on their team a mid-range performance review score. Achilleas Kostoulas, a researcher at the University of Graz in Austria, suggests most respondents tend to avoid voicing extreme opinions. He asserts many are averse to taking a stand on controversial topics. The combined effect of these tendencies, he says, is that when presented with a 'safe' choice at the center of the scale, respondents are likely to select it rather than reveal their 'true' opinion.
More data unreliability.
Conversely, in an interesting discussion of survey methodology in the American Institutes for Research's LinkedIn group, participant Brian Lashley argued: "If you force people to either agree or disagree and don't give them the option of 'neutral,' that dirties up the data more than letting them choose neutral. It creates measurement error because if people don't feel that they have a strong preference, they may choose to skip the question or put down an answer that doesn't represent their real thoughts."
Ditto those sketchy data issues raised by Sauro and Kostoulas.
Reliable data. That's what employee surveys are meant to deliver. Otherwise, how can any organization reach valid conclusions? The elephant in the room is planning for and interpreting neutrality.
How questions are phrased is a good first step. Ask yourself whether the topics in your employee surveys are familiar to your intended participants. Is it reasonable to think they should have an opinion? And what is it, exactly, that you want to measure? Consider this: an election is coming up between two candidates vying to represent the community where you live. You're asked which is best qualified, and you view both equally. Your answer? Probably neutral. If instead you're asked which candidate you're likely to vote for and you have a personal preference, your answer will be more specific. It all comes down to what the survey is trying to assess: perceived candidate qualifications or voter preference. (Hmmm, one can't help but wonder how pollsters phrased their questions in the lead-up to the 2016 U.S. presidential election, given the last-minute predictions and an ever-so-contrary outcome.)
TalentMap’s position is that a neutral answer is a neutral answer. Period. What employees are expressing is an inability to pick a side. If, for instance, a neutral answer is given in response to the statement: I’m proud to tell people I work at ____, there are some reasons why the respondent is proud, and other reasons why they’re not.
An organization's inclination is to want to lump these neutral answers in with positive or negative responses. But the best way to get a clear read on what your employee surveys are telling you is to keep undecided opinions separate. Give them a category and percentage of their own. There's a potential upside to those noncommittal responses, too. Negative perceptions are more difficult to resolve than the inner conflict expressed by neutral, undecided employees. That's something you'll want to keep an eye on and measure in future employee engagement surveys.
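In practice, keeping neutrals separate is straightforward: tally favorable, neutral, and unfavorable counts independently and report each as its own percentage, rather than folding the midpoint into either side. Here's a minimal sketch of that idea; the response labels and sample data are illustrative, not real survey results.

```python
from collections import Counter

# Illustrative five-point responses to a single survey statement.
responses = [
    "strongly agree", "agree", "agree", "neutral", "disagree",
    "neutral", "agree", "strongly disagree", "neutral", "agree",
]

FAVORABLE = {"strongly agree", "agree"}
UNFAVORABLE = {"disagree", "strongly disagree"}

def summarize(responses):
    """Report favorable, neutral, and unfavorable as separate percentages."""
    counts = Counter(responses)
    total = sum(counts.values())
    favorable = sum(counts[r] for r in FAVORABLE)
    unfavorable = sum(counts[r] for r in UNFAVORABLE)
    # Neutral gets its own bucket -- it is never merged into either side.
    neutral = total - favorable - unfavorable
    return {
        "favorable": round(100 * favorable / total),
        "neutral": round(100 * neutral / total),
        "unfavorable": round(100 * unfavorable / total),
    }

print(summarize(responses))  # {'favorable': 50, 'neutral': 30, 'unfavorable': 20}
```

Tracking that neutral percentage on its own, survey over survey, is what lets you watch whether the undecided group is shrinking toward the favorable side or drifting negative.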