Making 360 Feedback Work
The 360 feedback survey – it’s been around for a long time, yet is rarely implemented well. Innovations emerge regularly, but none seem to solve the core issues.
Over the last several months I’ve had the pleasure of debriefing more than 100 people on their 360 feedback results, and I’ve formulated some thoughts on the do’s and don’ts that make the difference between a process with incredible value and one that just causes upset – or worse, has no impact at all. First, the do’s.
1. Select the survey tool carefully, based on relevant competencies and familiar language. Most traditional 360 tools offer a list of competencies from which to choose. Less is more, I say. Identify the ones that are meaningful for your organization and that are – or can be – incorporated into your performance management systems (or better, your reward/recognition programs) in some way. Make sure the terms are clear and easily understood – no one reads the definitions in the tool!
2. Select survey recipients carefully. The recipient of stakeholder feedback must be someone in whom the organization wants to invest development effort and resources. There’s no point in providing feedback if there won’t be support for action taken as a result. Ideally the individual will have been in their role for at least a year, so their stakeholders have enough experience with them to feel comfortable offering an opinion. And have a reason for doing the survey, other than “it’s time” or “everyone at that level is doing it.”
3. Educate the raters. This is one of the biggest gaps in most 360 survey processes I’ve seen. Hundreds of different tools are in common use, each with its own competencies, language and rating scales, many of which are not clear or easily interpreted. Encourage raters to use the entire available scale – most use only the top half, which turns the midpoint into the effective bottom of the scale and reduces the nuance and range available. Ask raters to offer comments as often as possible, but without citing specific incidents or names, so anonymity is protected.
4. Provide debrief support. The results of a 360 survey are complex and multi-layered, despite appearing clear and simple. Handing someone their report and hoping they’ll glean useful insight from it is a big risk to take. Designate someone objective – an internal HR or OD partner with expertise and experience, or an outside professional coach or assessment practitioner – to review the results with the recipient and identify key messages and areas for development focus.
5. Encourage follow-up conversations. I’m a big believer in total transparency – in all things, but here especially, because it models vulnerability and openness about your areas for development. Thank your raters, tell them what you learned and what you’re going to do about it – and ask for their help.
Now for the don’ts.
1. Do not use a 360 survey as a substitute for direct performance feedback. If there are performance issues, the individual needs to hear about them in private, in a direct conversation with their manager.
2. Do not encourage recipients to try to identify raters from their verbatim comments. Doing so breeds distrust about the confidentiality of the process and pulls focus from the big-picture messages in the aggregated feedback.
3. Do not deploy surveys in large batches into business units where raters will be asked to respond to several surveys in a short time frame. Rater fatigue will result in less attention and thought paid to each survey, diluting the potential benefit to the feedback recipient.
Bottom line – the 360 survey can be a very valuable tool to support the development of your organization’s talent, but it requires thought, planning and investment to be effective. Don’t cut corners on time, people or budget if you want to do it well – otherwise, don’t bother at all.
If you have your own thoughts on how to make 360 feedback surveys as effective as possible, I’d love to hear from you in the comments below.