Why bespoke health and wellbeing surveys are seldom enough

Robertson Cooper’s Good Day at Work survey is the industry leader for collecting comprehensive data about your employees’ experiences. Based on decades of published research, the results empower you to take targeted, coordinated actions that create more Good Days at Work for all. Founding Director, Professor Ivan Robertson, explains why there is so much value in validation.

It might seem that the best way to get data about the feelings and views of employees in any organisation would be to ask a set of questions that are tailored specifically for that purpose. In addition, it may be tempting to ask a lot of questions to make sure every angle is covered. If this approach is followed, though, what will it produce?

One thing it will definitely produce is a lot of data! Each question will have a score, and there will be lots and lots of scores. Unfortunately, if there is no underlying model that shows what aspects of working life drive wellbeing and what outcomes are influenced by employees’ health and wellbeing, it will be very hard to see what the results mean and what follow-up action should be taken.

In fact, with most “blunderbuss” surveys of this type the data overwhelms people’s ability to do anything with it. Think about the stack of data that lands on every manager’s desk after an engagement survey. The results can create a lot of noise and confusion and fail to provide meaningful insights into what is happening, what causes what – and what to do about it. In the long run, there’s a risk that the results will have very little worthwhile impact.

There are a number of reasons for this:

  • Interpreting the scores when you get them

    Getting some kind of score for each question is a starting point, but how do you know whether the scores for each question suggest that things are OK or that there is cause for concern? Is a score of six on a ten-point scale good or something to worry about?

    Response bias is a well-known effect that causes people to respond in different ways to certain questions. This makes it difficult to compare raw scores on different questions without some form of benchmark. For example, co-workers are an important reference point for most people and the response to questions about co-workers will be different to questions about senior management. So, 6/10 for managers may be comparable with 7/10 for co-workers. As such, questions about specific aspects of the workplace may produce widely differing scores.

  • Knowing what questions to include

    Without the benefit of prior survey data, how do you know what questions to ask in the first place, and what is important? Of course, it is possible to conduct focus groups and use the data from these to design questions but there is a significant cost in doing so, and the problems of interpretation mentioned above still apply.

  • Understanding how to use the results to take action

    A bespoke survey could produce a great deal of specific data, but being able to take action based on the results requires knowledge of the drivers of the survey responses and the likely consequences of different follow-up actions. Unless the survey questions have been based on a proven model of drivers and consequences it will be extremely challenging to use the responses to determine what to do and support consistent action across an organisation.

A more fruitful approach is to use existing surveys that contain validated question sets derived from an underlying model of drivers and consequences. A small number of bespoke questions focusing on specific issues could be included too.

When benchmark data (i.e., normative scores from other comparable organisations) are available, or when you can create similar benchmarks for individuals and teams within your own organisation, the problem of interpreting scores is overcome. Raw scores can be categorised and interpreted because they are placed in context against other businesses and teams. For example, 6/10 for managers may be a comparatively good score, whereas 7/10 for co-workers may be relatively poor; even 2/10 for bullying may be much worse than in comparable organisations, while 7/10 for support from the boss could be very positive.
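The arithmetic behind this kind of benchmarking can be sketched in a few lines. The benchmark means and standard deviations below are invented for illustration; the point is only that the same raw score lands at very different percentiles depending on the norms it is compared against.

```python
# Illustrative sketch with hypothetical benchmark figures: interpreting
# raw survey scores against normative data rather than in isolation.
from statistics import NormalDist

def percentile_vs_benchmark(raw_score, benchmark_mean, benchmark_sd):
    """Approximate percentile of a raw score within a benchmark distribution."""
    z = (raw_score - benchmark_mean) / benchmark_sd
    return round(NormalDist().cdf(z) * 100)

# Hypothetical norms: questions about managers tend to score lower than
# questions about co-workers, so 6/10 for managers can outrank 7/10 for
# co-workers once both are benchmarked.
managers = percentile_vs_benchmark(6.0, benchmark_mean=5.2, benchmark_sd=0.8)
coworkers = percentile_vs_benchmark(7.0, benchmark_mean=7.4, benchmark_sd=0.6)
```

With these (invented) norms, the lower raw score for managers sits around the 84th percentile while the higher raw score for co-workers sits around the 25th, which is exactly the response-bias effect described above.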

There is a major difference between a set of bespoke, stand-alone questions and a set of questions designed to measure specific aspects of working life that are part of a validated cause-and-effect model. There are huge benefits to knowing that your questions are measuring something meaningful.

For example, scientific research into the causes and consequences of health and wellbeing in the workplace has uncovered: (i) a specific set of barriers and enablers that determine wellbeing at work and (ii) the personal and work-related outcomes driven by wellbeing. All these factors can be captured in a clear cause-and-effect model that then provides the basis for measurement.

What’s so good about using a cause-and-effect model?

The model above shows the six essential workplace factors that drive health and wellbeing and some of the key outcomes determined by the wellbeing levels of the workforce. It is self-evident that measuring these ‘six essentials’, plus the health outcomes and engagement outcomes, provides a clear picture of wellbeing levels and the workplace factors that are driving these levels.

The results of a survey measuring these things can then be used to design follow-up actions. For example, in the illustration above, resources and communication problems are a cause for concern and may be creating strain for this group of workers. This result immediately suggests follow-up actions, whereas results from a set of questions not derived from a validated cause-and-effect model are much less helpful in determining "what now?" – or, worse still, "so what?". Furthermore, if the questions have not been carefully validated, the results could be misleading and actions derived from them could make things worse, rather than better.

Conversely, if you know that your survey is underpinned by a validated model, you have the insight and confidence to take positive action.

What does it mean to use validated questions?

The process of validating a set of survey questions is complex, but the primary goal is to ensure that the questions actually measure what you want to measure. The validity of any single question is examined by checking the meaning with a sample of people and by comparing the replies to the question with responses to pre-existing questions in the literature that are already validated. Only when it is clear that the responses to the questions provide accurate information that compares with other measures is the validity requirement met.

A second important feature of using validated questions involves combining questions that address a specific topic into a scale that is based on a cluster of several related questions. Scales that ask about a topic in different ways provide much more reliable and accurate results. For example, a single question about work relationships does not provide the coverage of a scale made up of questions about co-workers, supervisors and senior management. Each individual question from a scale can be looked at on its own and the overall scale score also contains a useful summary – and provides a route for specific action to be taken.
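A minimal sketch of that idea, using invented data: several related items are averaged into a single scale score, and the internal consistency of the cluster is checked. Cronbach's alpha is used here as one common reliability statistic; it is an illustrative choice, not necessarily the exact method behind any particular survey.

```python
# Combining related items into a scale score, plus one common internal-
# consistency check (Cronbach's alpha). All response data is invented.
from statistics import mean, pvariance

# Each row: one respondent's answers to three "work relationships" items
# (co-workers, supervisor, senior management), scored 1-10.
responses = [
    [7, 6, 5],
    [8, 7, 6],
    [4, 5, 3],
    [9, 8, 8],
    [6, 6, 5],
]

# The scale score summarises the cluster; individual items remain
# inspectable for targeted action.
scale_scores = [mean(row) for row in responses]

def cronbach_alpha(rows):
    """Internal consistency of a set of items answered by the same people."""
    k = len(rows[0])
    item_vars = [pvariance([r[i] for r in rows]) for i in range(k)]
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(responses)
```

A high alpha suggests the items move together and the scale score is a reliable summary; a low alpha would suggest the items are not measuring one coherent topic.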

Understanding the most important drivers

A final and very powerful benefit of a clear cause-and-effect model, where specific drivers and consequences are measured with valid questions, is that the data can be analysed to show exactly which drivers are having the biggest impact. For example, in the illustration above, resources and communications are obviously in need of attention but do resources and communications have the strongest link with the health and wellbeing factors? Once the data is available it is possible to run importance analysis to provide insight into which drivers are most closely linked to the outcomes. This second level of analysis then provides a further basis for action beyond the drivers that have low scores.

Cut the guesswork out of Good Days at Work

Robertson Cooper’s Good Day at Work survey is the only wellbeing survey to offer fully validated, research-backed questions that peel back the curtain on what helps and hinders Good Days at Work in organisations everywhere. Our core survey questions deliver a full workforce diagnostic, examining resilience, pressure sources, engagement, mental and physical health, and psychological wellbeing.

Our team of skilled Business Psychologists and Analysts work with organisations to connect their results with outcomes that matter most to the business, including validated measures of productivity, intention to leave, advocacy and Good Days at Work.

And action starts as soon as each participant completes the survey, with detailed, bespoke wellbeing reports delivered to their inbox immediately.

If you’d like to find out more, get in touch to arrange a free, friendly chat with our wellbeing experts: hi@robertsoncooper.com.
