A Well-Designed Employee Survey Goes Well Beyond Layout Design
It’s no wonder some folks spend years earning their PhD in the art and science of surveys. Complexities abound, and like just about anything, almost everything can be argued one way or another. Still, there are a few very specific, clearly defined, not-to-be-deviated-from truths when it comes to good survey design.
Let’s talk response distribution norms, for instance. There’s no disputing that well-designed employee surveys, from the phrasing to the flow of questions (or “items,” as they’re called in survey research circles), have statistical properties that prove their efficacy.
Scientific design for validity’s sake
Times change. People change. Workforces change. Opinions shift.
With these sorts of considerations in mind, responsible survey vendors conduct regular statistical properties audits. Are survey items properly structured? What about flow? Are the different categories used to measure engagement still relevant? Does the questionnaire produce statistical outcomes compliant with American psychometric guidelines? Are the insights statistically sound and by extension a valuable resource for enhanced employee engagement and strategic post-survey action planning?
Essentially, “well-designed” employee surveys get responses across the full range of scale options, with most responses falling into the neutral or slightly positive ratings. When plotted on a graph, most of the data will sit near the middle, forming the shape of a hill or bell curve. That’s normal distribution. Anything else suggests poor structural design.
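To make that “hill near the middle” idea concrete, here is a minimal sketch of what a rough bell-shape check on 1–5 Likert responses might look like. The function name, sample data, and thresholds are illustrative assumptions, not TalentMap’s actual method or a formal normality test:

```python
from collections import Counter

def looks_bell_shaped(responses, scale_max=5):
    """Heuristic check that Likert responses form a hill: the modal
    rating sits at or near the scale midpoint, the extremes are the
    least-chosen options, and every scale point gets some use.
    (Illustrative only, not a formal statistical test.)"""
    counts = Counter(responses)
    midpoint = (1 + scale_max) / 2
    mode = counts.most_common(1)[0][0]
    # Mode within one point of the midpoint (neutral or slightly positive)
    near_middle = abs(mode - midpoint) <= 1
    # Both extremes used less often than the most common rating
    extremes_light = (counts.get(1, 0) < counts[mode]
                      and counts.get(scale_max, 0) < counts[mode])
    # Responses span the full range of scale options
    full_range = all(r in counts for r in range(1, scale_max + 1))
    return near_middle and extremes_light and full_range

# Hypothetical responses to one item on a 1-5 scale
sample = [1] * 3 + [2] * 10 + [3] * 25 + [4] * 30 + [5] * 12
print(looks_bell_shaped(sample))  # True: a hill centered just above neutral
```

A heavily skewed set of responses (say, almost everyone choosing 5) would fail this check, which is the kind of signal that prompts a closer look at item wording or structure.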
Aesthetic design is not so black and white for employee surveys
Do a Google search about survey design and format, and much of what’s out there is penned by thought leaders “of the day,” a good many with the intention of bolstering the name and reputation of their organization while positing their views as the most credible and informed.
Readers, beware.
Consider your sources.
Are we talking serious science?
Or the art of promoting a company’s products or services?
Was the white paper or editorial piece written recently? Does it consider the workplace of today? Or is it decades old, or even just a few years old, and not quite so relevant? Think Millennials and their impact on the work world.
A considerable amount of material bears publishing dates from the early 2000s or even further back. When you consider that the Internet came on-stream for the masses in 1995, comments made about online survey design back in those days don’t necessarily fit the au courant survey experience.
At TalentMap the other day, there was some discussion around questionnaire layout attributes. Continuous scroll vs. segmented thematic breaks or sections? Numerical scale structure (one through five, seven, or 11) vs. words (strongly disagree – disagree – neutral – agree – strongly agree)? Which approach is better? Best?
Neither and both. No one approach is more correct than the other.
From a pure research design perspective, a randomized, no-categories approach is good when exploring or confirming new ideas and testing brand-new items (or questions).
“But if you’ve already got data, comparative data, you don’t have to do this,” says Sean Fitzpatrick, Founder and President of TalentMap. “When standard items are proven, the pros outweigh the cons.”
As organizations, we want high response rates when it comes to employee surveys
- If there are 120 items on a survey and employees scroll through question after question, on and on, seeing just how long the survey is, frustration is not improbable. Respondents want to see they’re almost done. After all, our experiences with the Internet have conditioned us to expect quick reaction times. A survey with thematic breaks provides that interactive response through a “done that section, on to the next” kind of gratification.
- And without section breaks, some people reach a point where enough is enough and pack it in without finishing, which hurts completion rates. It also takes people longer to complete the survey.
- Research shows respondents will score items (or questions) more as a group when the items are presented in thematic sections. That doesn’t change distribution (the bell curve norm), but it does speed up response times.
- As organizations, we can gather more employee engagement data by asking 120 questions instead of 90, knowing the questionnaire can be completed in the same amount of time it takes to finish a randomized, continuous-form survey.
As for scale, what “strongly disagree” means to you could mean something slightly or significantly different to someone else. We don’t really know how people interpret the words used in a scale. What we do know is that, regardless of whether the scale uses words or numbers, respondents know the middle is neutral territory and work from there.
When it comes to a well-designed survey, the issue really isn’t format. What it boils down to is normal distribution.
For skeptics seeking tangible proof: you’re more than welcome to make layout modifications to an existing well-designed, statistically validated survey, ideally one you’ve used before. Implement the survey in two different layout formats if you wish. It’s doable; it’s been done before by the likes of TalentMap and others. Though the look will differ, there won’t be any difference in the data outcome.