Saturday, December 22, 2012

Using Assessment Centres for Evaluating Potential – A Leap of Faith?

“She is very bright”, said the first HR Manager. “She is the definition of a tube light - bright on the outside and hollow inside”, said the second HR Manager. Two senior HR professionals, and two very different inferences – about the same employee! I heard this conversation a long time ago. It came back to me recently, when I was thinking about Assessment Centres – the effectiveness of Assessment Centres as a tool to evaluate the ‘potential’ of employees, to be more precise. Usually, comments on the 'brightness' of an employee have more to do with the perceived 'potential' of the employee than with his/her performance!

As I had mentioned earlier (See Paradox of Potential Assessment), the basic issue in potential assessment (which sometimes does not get enough attention) is 'potential for what?' Many answers are possible here. They include:

1. Potential to be effective in a particular job/position
2. Potential to be effective in a particular job family
3. Potential to be effective at a particular level (responsibility level)
4. Potential to take up leadership positions in the company
5. Potential to move up the organization ladder/levels in an accelerated timeframe etc.

Logically, the first four answers should lead to the creation of a capability framework that details the requirements (functional and behavioral competencies) to be effective in the job/job family/level/leadership positions that we are talking about. Once this is done, a competency-based assessment centre is often used to assess the potential of employees against that framework. This is where the trouble begins (actually, the problems start earlier than this – with the definition of ‘potential’ and with the creation of the capability framework. But that is another story).

Let us begin by looking at a couple of basic issues. An assessment centre is essentially a simulation*. Hence, there are always questions on the extent to which the simulation matches reality (the requirements of the job/level). This becomes even more problematic in the case of international assessment centres (for global roles/with participants from different countries), as cultural differences need to be factored in when designing the assessment centres and while interpreting/evaluating the behavior/responses of the participants (e.g. what is an effective response/acceptable behavior in one culture might not be so in other cultures).

We need to avoid a situation where the participant gives the correct answer/response in the simulation because he/she knew it from prior experience/knowledge, and not because he/she was able to arrive at it in response to the situation (and hence demonstrate the competency). So the simulations often use a context that is different from the immediate job/organization context – while trying to test the same underlying competencies. This can bring additional complications in ensuring an adequate match between simulation and reality. By the way, this is one of the factors (knowledge of the correct answer without knowing how to arrive at it) that can give rise to the ‘bright on the outside – hollow inside’ kind of situation mentioned at the beginning of this post. Another factor could be ‘sublimated careers’ (See Career Development & Sublimation).

Each of the tools/exercises in the assessment centre is designed to test a set of competencies. This implies that each of the participants should have sufficient opportunity to fully demonstrate all the relevant behaviors corresponding to all the competencies during the exercise. Assuming that there are 4 competencies (each with 3 relevant behavioral indicators) being tested in a particular exercise, each participant should have an opportunity to demonstrate 12 behaviors. If the evaluation of the behavior is done using a frequency scale (e.g. always, most of the time, sometimes, rarely etc.), it would imply the need to demonstrate each of the behaviors multiple times during the exercise (e.g. demonstrating the behavior 3 times will get the participant the highest rating) – a total of 36 behaviors. Of course, if this is a group exercise, this number gets multiplied by the number of participants (e.g. 36*6 = 216 behaviors for a group of 6 participants). This is practically impossible to do in a 45-minute group exercise! Of course, exercises can be of longer duration and there can be more exercises (requiring fewer competencies/behaviors to be tested per exercise). However, considering the cost and time pressures in most organizations, this becomes difficult. This implies that the very design of the assessment centre might prevent the participants from fully demonstrating their competencies/potential during the centre – leading to artificially lower potential evaluations.
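The arithmetic above can be sketched as a quick back-of-the-envelope calculation. All the numbers here (4 competencies, 3 indicators, 3 repetitions, 6 participants, 45 minutes) are simply the illustrative figures from the paragraph above, not a claim about any real assessment centre design:

```python
# Back-of-the-envelope check of the illustrative numbers used above.
competencies = 4    # competencies tested in the exercise
indicators = 3      # behavioral indicators per competency
repetitions = 3     # demonstrations needed for the highest frequency rating
participants = 6    # group size

behaviors_per_person = competencies * indicators * repetitions
behaviors_per_group = behaviors_per_person * participants

exercise_minutes = 45
rate = behaviors_per_group / exercise_minutes

print(behaviors_per_person)  # 36
print(behaviors_per_group)   # 216
print(round(rate, 1))        # 4.8 distinct observable behaviors per minute
```

At nearly five codable behaviors every minute of the exercise, the observability problem described above becomes concrete.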

Now, let us come back to problems specific to using assessment centres as a tool to measure potential. Even in a best case scenario, what the assessment centre is measuring is the degree to which the employee/participant demonstrates the behaviors corresponding to the requisite competencies during the assessment centre. So, at best it can give a good estimate of the current level of readiness of the employee for a particular role/level. However, this does not really indicate the potential of the employee to take up that role/reach that level in the organization hierarchy in the future, because the employee has the opportunity to learn/develop the competencies during the intervening period. An assessment centre can’t give any indication of the extent to which (and the speed at which) the employee will further develop/enhance the competencies.

Assessment centres are based on competency models. As I had mentioned in 'Competency frameworks - An intermediate stage?', one of the basic assumptions behind developing a competency model is that there is one particular behavioral pattern that would lead to superior results in a particular job (i.e. there is 'one best way' to do the job). This might not be a valid assumption in the case of most non-routine jobs. If there are other ways to be effective in the job (say, based on a deep understanding of the context/great relationships with all the stakeholders), it can lead to 'successful on the job but failed in the assessment centre' kind of scenarios. Of course, it can be argued that such individuals won't be successful if they are moved to a different geography, and hence a low rating on their potential coming from the assessment centre is valid. However, it still does not negate the fact that they can be effective in that role/level in that particular context. Yes, this (producing results without possessing the specified competencies) can sometimes resemble the ‘bright on the outside – hollow inside’ kind of situation mentioned earlier.

Another problem is that the results of the assessment centres are rarely conclusive – in the case of most of the participants. What you get as the result of the assessment centre is a score on each of the competencies (say, on a 5-point scale). Converting these scores into a ‘Yes or No’ decision on whether the employee has the potential to move into the role/level often involves many inferential leaps (similar to the ‘leaps of faith’ mentioned in the title of this post). It is easy to string these scores together into some sort of a decision rule/algorithm (e.g. if a participant has a score of 3 and above on 3 of the 5 competencies, and an average score of 3 overall, the answer is a ‘Yes’). Of course, we can do tricks like assigning different weights to the individual competencies and specifying minimum scores on some competencies, and come up with a decision rule that appears to be very objective (or even profound!) and that gives a clear ‘Yes or No’ decision (on whether the participant has the potential or not). But the design/choice of the algorithm is more of an art than a science, and it can be quite subjective and even arbitrary (unless the organization is willing to invest a lot of time and money in a full-fledged validation study)!
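To make the arbitrariness concrete, here is a minimal sketch of the kind of decision rule described above. The function name, the thresholds and the example score lists are all hypothetical, taken only from the "3 and above on 3 of the 5 competencies, average of 3 overall" illustration in this post:

```python
# Hypothetical 'Yes or No' potential decision rule, as illustrated in the post.
def has_potential(scores, min_score=3, min_count=3, min_average=3.0):
    """Return True if at least min_count of the 1-5 competency scores are
    min_score or above, AND the overall average is at least min_average."""
    strong = sum(1 for s in scores if s >= min_score)
    average = sum(scores) / len(scores)
    return strong >= min_count and average >= min_average

print(has_potential([4, 3, 3, 2, 2]))  # False: 3 strong scores, but average is 2.8
print(has_potential([4, 3, 3, 3, 2]))  # True: 4 strong scores, average exactly 3.0
```

Note how a single competency score moving from 2 to 3 flips the verdict from ‘No’ to ‘Yes’ – and how silently the choice of `min_count` or `min_average` (or any weighting scheme layered on top) decides people's careers, which is exactly the ‘art rather than science’ problem.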

So what does this mean? To me, an assessment centre is a tool; a tool that has certain capabilities and certain limitations. The tool can be improved (if there is sufficient resource investment) to enhance the capabilities and reduce the limitations to some extent. But some basic limitations will remain. Hence, if one is aware of the limitations and the capabilities, one can make an informed decision on whether it makes business sense to use this tool in a particular context – depending on what one is trying to achieve and the organization's constraints/boundary conditions. If you push me to be more specific, the best answer that I am capable of at this point is as follows: ‘It is valuable to use assessment centres as one of the inputs if the objective is just to assess the current level of readiness of the employee for a particular role/level. If the objective is to assess the potential of the employee to take up that role/reach that level in the organization hierarchy in the future, assessment centres are of limited value when the intervening time period is long – say, anything above 2 years’!!!

*Note: Assessment centres need not always be pure simulations. Tools like the Behavioral Event Interview (BEI) are often used as part of assessment centres. However, it becomes difficult to use BEI in an assessment centre designed to test the employee’s potential for a higher-level role. This is because the employee might not have had enough opportunities (till that point in his/her career) to handle situations that require the higher-order competencies (required for the higher level/role and hence being tested in the assessment centre). Hence, she/he will be at a disadvantage when asked (during the BEI) to provide evidence of having handled situations/tasks that require the higher-level competencies.
Any comments/suggestions?


Joseph George said...


This is a useful write-up for a conversation on Assessment Centers.
Allow me to tag along some notes to the theme, especially given your rhetorical title.

Without a consideration of predictive validity and method engineering, the mastery in use of the 'tool' as described falls short of its inherent potential.

Several principles of science are to be understood commonly by users and designers of centers. Else, people will take ideological positions for the sake of argument alone.

There are underlying aspects of competencies too that need discussion. E.g. there are constructs like dysfunctional opposites or dynamic flippers that may get classified differently in competency systems. One system may treat the data as a negative behavior, while another may treat it as method-induced stressors. Some treat competencies as behavioral evidence, while others see competencies as windows to a clinical side of the personal preferences of the assessee.

Usually, the expanse of this field is better referenced between journal articles and empirical research papers on the subject. But putting such matters out there in the public space makes it possible to engage in reflection of deeper design and implementation intricacies.

To me, the training of assessors is perhaps the most sensitive of dimensions, as it takes a very discerning Lead Assessor to judge how the assessors align on the center's design, the related method custodianship, and how the final data integration will come together in the room.

Legal issues in this respect are a tad sensitive too, especially for global firms whose home nations have formal legislation with respect to the use of results from center outcomes.

Thanks for the opportunity to share my own views on the matter.

Prasad Oommen Kurian said...

Thank you very much Joseph. I agree with your comments. Lack of congruence among the different behaviors in a competency framework (behaviors that are supposed to lead to superior performance in a particular job) is a common problem. I am all for the integration of polar opposites (in the ‘integration of thesis and antithesis leading to synthesis’ sense) to respond effectively to challenges in a complex situation (a job, or even life in general). But if one just specifies mutually conflicting/contradictory behaviors (without specifying the ‘integrated behavior’ that reconciles the conflicting behavioral requirements) as part of the same competency framework (the set of behaviors on which an employee needs to be evaluated in the assessment centre), it leads to a situation where high performance on one behavior necessarily leads to low performance on the other. Since both these behaviors are supposed to be essential for success (the reason why they appear in the competency framework), this leads to confusion (or even to loss of confidence in the competency framework and in the assessment centres that are designed on the basis of the competency framework).

Anonymous said...

Organizations are now widely using assessment centres. Assessment centres are carried out by the HR team or other experts, and they are used to evaluate candidates on the basis of their communication skills, aptitude and behavior. Organizations use a number of activities and tasks for assessing the capabilities of a candidate. This article has helped me a lot in preparing for the tasks of assessment centres.
I assure you that if you follow these steps of preparation, it will prove to be highly helpful for you.