How should I design a 360 feedback report?
The true value of a competency assessment comes from the subsequent reporting and development at an individual, team, and organisational level. Much of the success in reporting comes from the design process itself: enabling users to quickly and easily understand the key areas where they need to develop, and linking those areas to relevant learning.
What to consider when designing my participant report
There are a few main factors to consider when designing a participant report. Functionally, they can be broken down into:
What is the purpose of the report?
Who will be debriefing the report?
How is the report being delivered?
What are the next steps?
Depending on the framework you have used to assess individuals, the purpose of your report can vary significantly from project to project. You may have measured a leadership competency framework and want to develop behavioural areas that enable leaders to grow in their roles, and help them strive for new ones in your organisation. Or you may be measuring functional skills, and want to assess whether people within your organisation are displaying the skills specifically required of their role. However you have designed your framework, your report should facilitate conversation based on the insightful feedback received in the assessment component.
Will the report be debriefed by a trained coach or consultant? If not, who will be responsible for understanding the data and feedback that come out of the report? Reports must be designed so that they are simple and easy to interpret. When designing your report, you must consider the medium in which it will be delivered, and adjust the level of detail and data you release to individuals and their debriefers. Identify what each report must give its target audience, and adjust the data so that it is relevant. Individual reports shouldn't show high-level organisational data, just as business intelligence reports shouldn't show individual statement scores. Mapping the right information to the right audience will ensure that information is used successfully.
You should also take into account the feedback style of each reporting type. Individual reports may be targeted at personal areas of growth, so the report should be designed to highlight the areas in which growth needs to occur. Team reports tend to be designed around the growth of a team, so averages and outliers should come more into play. High-level business intelligence reports look at trends throughout the organisation, so filtering of data and comparisons of competency and score gaps should take centre-stage. As when designing the framework itself, we must identify the outcome before designing reports, to ensure maximum relevance and success.
Ultimately, the report should guide the user clearly through the feedback they have received, identify and highlight the areas they need to focus their development on, and direct them to the resources that are available for them to learn and develop from.
PDF Reports
PDF reporting has long been the main output from a competency assessment. These reports tend to be many pages long, showing the granular scores of individual behavioural statements as well as key highlights from the assessment, and any verbatim commentary that has been captured for an individual's report. Many people like PDF reports because they can write on the pages and have something physical in hand to refer to. The reality with individual reports, however, is that they are very rarely re-used; more likely than not they will be at the back of a filing cabinet or in a bin a few days later, never to be seen again.
PDF reports will always be available and popular, but in a greener workforce where printing 30-page documents for single use isn't advisable, and where desktop and mobile devices dominate the way we work, reporting should move on from its traditional PDF format.
Interactive Online Reports
Online, interactive reports are replacing traditional PDF reports. The advantages of online reporting are that it can be easily manipulated by the user, lets them focus their debriefing and developmental actions on the areas most relevant to them, and provides, all in all, a much crisper user experience.
Allowing users to manipulate their own data, see the insights available to them, and create their own re-assessment and development actions off the back of an integrated online report also means that, rather than being a one-off event, competency assessment becomes part of an ongoing change-management mindset.
Enhancing my report's mobile capabilities
The average iPhone user spends 3 hours and 15 minutes every day actively on their mobile phone. 80% of all emails are now opened on mobile phones. If phones aren’t used as a core part of your assessment, reporting, and development process, then you’re not utilising one of the main mediums of communication for your people.
Similar to the benefits of online interactive reporting, mobile phones allow users to focus their feedback journey on areas of their choosing and to filter the information available to them. Further benefits of mobile phones are, in essence, in the name: "mobile". The ability to access your assessment and report at any point, whether in between meetings, while scrolling through emails on a lunch break, or even on the commute to and from work, is particularly valuable. As more companies adopt a flexible-work mindset, mobile phones become more and more critical to the successful implementation of talent and leadership programs, and should be used accordingly.
Should I include benchmarks and norms?
It is very tempting to use external and global benchmarks and norms to compare our organisations against those of our peers and competitors, globally and within our own industry. However, as with all data sources, it is key to understand the data, and the biases, you are mapping against before making the comparison itself.
Global norms are quite often misused because organisations do not take into account cultural factors, the nuances of competency frameworks, and indeed when the norms were gathered. Organisations are naturally different. The culture of one bank will be completely different from the culture of another, and while the frameworks they are measured against might be very similar, the way individuals respond to questions framed in a certain way is highly idiosyncratic to each organisation.
Quite often, when organisations compare against external benchmarks, the frameworks themselves are nuanced differently. While a set of competencies might match, and their statements might be relatively similar, small differences in nuance and intricacy mean that their interpretation can be wildly different, which affects the resulting scores. The culture of work and the way we provide feedback are also constantly evolving: if the comparative scores date from 10 years ago, they hold little to no relevance for how our organisational scores should be read.
Benchmarking, when used correctly and with full contextual understanding, is an incredibly helpful tool. However, it is far better to build up your own benchmarks over time, or to know exactly the context of the benchmarks you are comparing your information against, than to use industry benchmarks without knowing the full detail.
Want to put your 360 feedback capabilities to the test? Click here to take a short survey, and see what is possible.