Back from Thanksgiving vacation and hope to stay on top of these reflections through the end of the year and into next year. Here are my thoughts on Clark’s blog on Kirkpatrick.
Quote: “Level 1 Reaction At reaction level one asks learners, usually through ‘happy sheets’, to comment on the adequacy of the training, the approach and perceived relevance. The goal at this stage is to simply identify glaring problems. It is not to determine whether the training worked.”
Reflection: Interesting that the levels aren’t chronological.
Quote: “Level 2 Learning The learning level is more formal, requiring a pre- and post-test. This allows you to identify those who had existing knowledge, as well as those at the end who missed key learning points. It is designed to determine whether the learners actually acquired the identified knowledge and skills.”
Reflection: Intuitively, this level seems to be the most important for planning purposes.
Quote: “Level 3 Behaviour At the behavioural level, you measure the transfer of the learning to the job. This may need a mix of questionnaires and interviews with the learners, their peers and their managers. Observation of the trainee on the job is also often necessary. It can include an immediate evaluation after the training and a follow-up after a couple of months.”
Reflection: I’m always a bit leery when I see the words “behavior” and “learning” used together.
Quote: “Level 4 Results The results level looks at improvement in the organisation. This can take the form of a return on investment (ROI) evaluation. The costs, benefits and payback period are fully evaluated in relation to the training deliverables.”
Reflection: I tire a bit of the business definition of ROI. It doesn’t take into account many important intangibles.
Quote: “Kaufman has argued that it is merely another internal measure and that if there were a fifth level it should be external validation from clients, customers and society.”
Reflection: I agree with Kaufman.
Quote: “Traci Sitzmann’s meta-studies (68,245 trainees, 354 research reports) ask ‘Do satisfied students learn more than dissatisfied students?’ and ‘Are self-assessments of knowledge accurate?’ Self-assessment is only moderately related to learning. Self-assessment captures motivation and satisfaction, not actual knowledge levels. She recommends that self-assessments should NOT be included in course evaluations and should NOT be used as a substitute for objective learning measures.”
Reflection: There are so many studies that collect data like this. It never made sense to me why we worry so much about how people feel versus how much they learn. From personal experience, I know that how I feel is not always a reliable indicator of how much I am learning.
Quote: “Learners can be happy and stupid. One can express satisfaction with a learning experience yet still have failed to learn.”
Reflection: Why do we care so much about learner satisfaction? Much of higher ed has turned to customer satisfaction as its most important metric. It is easy to slip into this model because instructors both educate and evaluate the learner’s work. If we moved to a model with standardized, external assessments, that conflict of interest could be addressed. But that may not be the best plan either. Maybe we need to learn how to test the relevance and longevity of knowledge.
Quote: “Learners often learn under duress, through failure or through experiences which, although difficult at the time, prove to be useful later.”
Reflection: Resilience is important at every level. A learner with high motivation, effort, and character will accomplish much.
Quote: “Tests are often primitive and narrow, testing knowledge and facts, not real understanding and performance.”
Reflection: Completely agree that testing and evaluation are among our biggest obstacles. How do we evaluate students authentically in a way that doesn’t force instructors to spend most of their time during a course evaluating student performance?
Quote: “…Level three data should take precedence over Level two data. However, this is complicated, time consuming and expensive and often requires the buy-in of line managers with no training background, as well as their time and effort. In practice it is highly relevant but usually ignored.”
Reflection: This makes my argument. And I’m not completely sure that education is the only answer. There are several complicated factors working together that affect organizations. Without a full analysis and unbiased perspective, it is difficult to determine the best way to handle training.
Quote: “In practice Level 4 is often ignored in favour of counting courses, attendance and pass marks.”
Reflection: I’m hoping that by looking into Agile Learning Design we will find a way to make training more meaningful.
Quote: “Kirkpatrick is the first to admit that there is no research or scientific background to his theory. This is not quite true, as it is clearly steeped in the behaviourism that was current when it was written.”
Reflection: I mentioned this earlier. It’s good to go through the process of thinking and reflecting to come to your own conclusions. It feels like I’m following Clark’s line of reasoning when this happens.
Quote: “The Kirkpatrick model can therefore be seen as often irrelevant, costly, long-winded, and statistically weak. It rarely involves sampling, and both the collection and analysis of the data is crude and often not significant. As an over-engineered, 50 year old theory, it is badly in need of an overhaul (and not just by adding another Level).”
Reflection: Strong statement by Clark. He clearly lays out the argument to support his opinion.
Quote: “Evaluation should be done externally. The rewards to internal evaluators for producing a favourable evaluation report vastly outweigh the rewards for producing an unfavourable report. There are also lots of shorter, sharper and more relevant approaches…”
Reflection: Easier said than done. Is all training something that is standard from company to company? How is customization handled? Who will do the evaluation?