Nomination Process – Pre-nominated or nominate your own?
When launching your assessment process, consider whether each individual should nominate their own raters, or whether you should pre-nominate raters on their behalf. The rationale behind pre-nomination is that it removes the temptation for individuals to nominate only people who will give them favourable feedback. However, pre-nomination is time-consuming from an administrative perspective, and adds extra complications to your rollout if certain details, such as email addresses, are wrong. The risk of biased self-nomination can instead be countered by communicating clearly how each rater should be nominated.
Another key aspect to consider is whether you have a manager approval process, whereby once an individual nominates their raters, a manager logs in to approve or deny the selections. If a manager is required to review and approve the raters chosen by their team members, it often creates a bottleneck in the overall process, because managers must action this task before anyone else can participate. In our experience, this can also undermine the sense of trust in the process.
After you have considered the intricacies of the assessment, it's time to launch. Giving users access to a platform that offers a great user experience, and is quick, clean and easy to complete, will go a long way towards ensuring the success of your rollout. As users log in via the process you have set up for them, they should be able to respond to a GDPR consent form and be guided through the steps of the assessment with ease and speed.
Timeline of events
Once you’ve launched your assessment, it’s key to decide how long to leave it open for participants to respond. The ideal timeline is two to three weeks. This period gives individuals a few days to complete any nomination process you have in place, and then to complete the assessment themselves.
It is far more difficult to get raters to respond to an assessment than the individuals themselves. To counter this, set up a reminder email process. Ideally, reminder emails should go out every five days at the start of the assessment window, every three days as the window progresses, and daily towards the end, to ensure as many raters as possible provide feedback. Making it clear that reminder emails will cease once the assessment is complete is a very effective incentive for people to finish their responses!
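The escalating cadence described above can be sketched as a simple scheduling function. The phase boundaries here (first, middle, and final third of the window) are illustrative assumptions rather than a prescribed rule:

```python
from datetime import date, timedelta

def reminder_dates(start: date, end: date) -> list[date]:
    """Sketch of an escalating reminder cadence: every 5 days early in
    the window, every 3 days mid-window, daily towards the end.
    The third-of-window phase boundaries are assumptions."""
    total = (end - start).days
    dates = []
    day = 0
    while day < total:
        elapsed = day / total
        if elapsed < 1 / 3:        # early window: every 5 days
            step = 5
        elif elapsed < 2 / 3:      # mid window: every 3 days
            step = 3
        else:                      # final stretch: daily
            step = 1
        day += step
        if day < total:
            dates.append(start + timedelta(days=day))
    return dates
```

For a three-week window this produces sparse reminders in week one and daily chasers in the final days, matching the cadence in the text.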
Whilst reminder emails are an effective way of ensuring raters complete their feedback, as a project administrator you may also want to track and customise when reminders are sent, and check the status of responses. The best way to keep track is through an online status report. Ideally, this should tell you whether raters have been nominated, how many have responded, and all other administrative details around timing and completion of the assessment. An online report like this should also make it easy for you to close the assessment window and release reports to the relevant individuals.
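As a rough illustration of the kind of status report described above, the sketch below aggregates per-individual response counts. The record structure and field names are assumptions for illustration, not a reference to any particular platform:

```python
from dataclasses import dataclass

@dataclass
class RaterResponse:
    individual: str   # the person being assessed
    rater: str        # the nominated rater
    completed: bool   # has this rater submitted feedback?

def status_report(responses: list[RaterResponse]) -> dict[str, str]:
    """Per individual, summarise how many nominated raters have responded."""
    totals: dict[str, list[int]] = {}
    for r in responses:
        done, total = totals.setdefault(r.individual, [0, 0])
        totals[r.individual] = [done + (1 if r.completed else 0), total + 1]
    return {name: f"{done}/{total} responded"
            for name, (done, total) in totals.items()}
```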
Your Administration Process
The assessment itself makes up only a small part of a wider talent initiative, yet the administrative effort that goes into successfully building, launching and completing a competency assessment is substantial. Not only is the process arduous and time-consuming, but small errors at any point of the administration can be perceived as major errors by the audience. Mountains are made out of molehills, and trust can disintegrate very quickly.
The ideal scenario is to outsource the administrative effort to your external partner. As well as taking the workload off your hands, they should be able to guide you through a best-practice approach to launching your programs, and run the assessment accurately and without error.
Should I run the assessment all at once, or in a staggered approach?
Imagine a scenario where 100 individuals are going through a competency assessment. Each might have 5 peers, 5 direct reports, and their manager providing feedback. While it is tempting to think of this as simply a group of 100 people, each of those individuals has 11 other raters giving feedback on them, so the group of 100 actually involves over 1,000 people in the feedback process. Additionally, some of those individuals may overlap, and have to respond to an assessment not only on themselves, but also provide feedback on multiple other people.
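The arithmetic above can be made explicit with a short helper. The rater-group sizes are the example's illustrative numbers, and the sketch assumes no overlap between rater groups:

```python
def assessment_population(individuals: int, peers: int = 5,
                          direct_reports: int = 5, managers: int = 1) -> int:
    """Total number of feedback responses to collect, assuming each
    individual's rater group is distinct (illustrative figures)."""
    raters_per_individual = peers + direct_reports + managers  # 5 + 5 + 1 = 11
    return individuals * raters_per_individual

# 100 individuals x 11 raters each = 1,100 responses to chase
print(assessment_population(100))
```

In practice overlap between rater groups reduces the headcount, but not the number of responses each person must complete, which is where survey fatigue sets in.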
If you have larger groups of people going through an assessment, it may be worth staggering the rollout, depending on the resources you have allocated and whether individuals going through the process are likely to experience survey fatigue.
Is this project across multiple languages?
Increasingly, in a globalised workforce, assessments are translated and completed across multiple languages. This can be incredibly effective and give your competency assessment wider scope; however, there are some critical factors to consider: is the framework translated, and by whom? Are your communications translated? Have you considered cultural differences in how people give feedback?
Nuance in language is integral to the success of a competency assessment, and frameworks are more often than not developed around the culture of one segment of the business. In translating the assessment and the communications around it, it is crucial that whoever handles the translation understands these cultural idiosyncrasies and communicates the assessment in a way the target audience will understand. Similarly, different cultures provide feedback in different ways, and understanding these cultural factors will change how feedback is not only given but also interpreted from a reporting perspective.