How do we measure the effectiveness of training and trainers? Or rather, how should we measure it?

I will explore the question in an ITIL® context here, though the question and its answer(s) are equally relevant to any training.

Effectiveness should be measured through the outcome delivered by the training.

Of course, it depends on the objectives of the client and/or students – but generally the outcome can be measured through a couple of things:

  • How much the training enabled the students to enhance knowledge/skills that they are able to apply in practice.
  • If it is a certification-based training, how well the training enabled the students to achieve certification.

The first parameter is difficult to measure (especially the last part of it: “able to apply in practice”), because the capabilities, profile, etc. of each student are a definite factor.

So training organizations tend to measure a step earlier – at the end of the training program, through training feedback. A good metric considering the constraints, but obviously not the best. The organizations the students belong to would most definitely want to measure the latter part of that parameter – the enhancement of knowledge and of the students’ performance/value to the organization.

In this post, I want to focus a little more on the second measure: success rate in certifications. Most training organizations do use this, and I have seen more than a handful of organizations where it is a major (and many a time, ‘the major’) metric for trainers’ performance.

In an ideal context, this gives a perfect idea of the effectiveness of the training (like saying ‘under ideal testing conditions’). In the world we live in, I have a few concerns about it, which training organizations need to address when they want to measure effectiveness using the ‘success rate’.

Let me list some of the concerns and scenarios at the top of my mind:

As I mentioned about the second parameter above, the success rate is ultimately a factor of the capabilities, profile, level, etc. of each student – irrespective of how good or effective the trainer and the training program itself might be.

I had a recent experience where I was supposed to deliver ITIL V3 Foundation training to a batch of students with five or more years of experience in IT. At the last moment, a couple of them had to drop out – and two very junior team members (almost freshers) were put in as replacements (neither I nor the training coordinator was aware of this change in levels). When the results came, almost all students scored 80% or above. Of those two, one just managed to clear the certification and the other failed miserably.

Having said this, you can definitely use this parameter to measure a training initiative by setting a minimum target – say a minimum of 80% or 85% success rate in all batches.
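As a rough illustration of such a minimum-target check (the 85% threshold, batch names, and numbers here are all hypothetical, not from any real training data), a minimal sketch might look like:

```python
# Hypothetical sketch: flag batches whose certification success rate
# falls below a chosen minimum target. All figures are made up.

MIN_SUCCESS_RATE = 0.85  # assumed minimum target (85%)

def success_rate(passed: int, attempted: int) -> float:
    """Fraction of students in a batch who cleared the certification."""
    return passed / attempted if attempted else 0.0

# batch name -> (students who passed, students who attempted)
batches = {
    "Batch A": (17, 20),  # 85% - meets the target
    "Batch B": (12, 16),  # 75% - below the target
}

for name, (passed, attempted) in batches.items():
    rate = success_rate(passed, attempted)
    status = "OK" if rate >= MIN_SUCCESS_RATE else "below target"
    print(f"{name}: {rate:.0%} ({status})")
```

The point of a threshold like this is that it checks the initiative as a whole, rather than ranking individual trainers against each other.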

The problem starts the moment this parameter is used to compare the performance of trainers, or to measure continual improvement of trainers and training programs – it can give misleading measurements!

And, even worse, it triggers some very concerning practices, like:

Trainers are forced or tempted to adopt ways of ensuring a good and ever-improving success rate (though it looks a bit difficult to improve on 100%! ;-))

Not very long ago, a trainer I had interviewed spilled the beans at a later point: he used to show the ‘actual exam questions’ (as in, projecting them on the screen) during the training program – that explained his ‘high success rate’ to me. His justification was that he was measured only on that parameter in his previous organizations!

The greater surprise came later, when I recounted that incident to my team. One of my team members admitted that many other training organizations were doing similar things, if not the same – including <a well-known training organization in the ITIL space>.

In such a scenario, you obviously get a grossly misleading perspective of training effectiveness!

Again, to set the context right: I feel it is not enough (or even not right, in many cases) to point fingers at such trainer practices. It all comes down to how effectiveness is measured.

(It is like this: if you measure your team’s or team members’ effectiveness by how promptly they come in and how many hours they put in, they will give you great results on those metrics by focusing on them – but your deliverables and quality might still suffer. If you instead focus on deliverable- and quality-based metrics, then people will focus on those, and will deliver better results to you.)

Another interesting and related experience I had recently emphasizes the client’s perspective:

A client, at the end of negotiations for a training contract for his team, asked me a question (more like an afterthought): “Can you give us a 100% pass guarantee for the students who will attend your ITIL Foundation program?”

My answer was the obvious “no” – and frankly, I even expected the client to go back to the negotiating table and be picky. But to my surprise, he laughed and appreciated it, saying he would have had doubts about my training if I had said ‘yes’!

This incident not only strengthened my concerns about training organizations using that parameter as the prime one, but also reshaped my view of what clients expect and value from trainers and training programs.