Course completion rates, measures of time spent learning, and learner perception are among the most common methods used to measure learning effectiveness. However, none of these measures provides meaningful insight into learner competency, behavioral change in learners, or the organizational impact of learning activities.
The most effective measures of learning impact track whether a learner can actually “do,” that is, apply new competencies in a real-world scenario. Competency-based assessments, simulations, and long-term behavioral metrics can create a more vivid picture of learning effectiveness in both lifelong learning and online education programs.
Traditional Measurements of Learning Miss the Mark
Learning & Development (L&D) professionals are often called upon by senior leadership to measure learning effectiveness. Reports on the success of learning can be used to justify existing investments in L&D content or eLearning technologies, and they can also help preserve L&D funding when budgets are being cut. When a recent study by Will Thalheimer surveyed L&D professionals on the methods they use to measure learning effectiveness, the results were surprising.
Over 80 percent of L&D professionals measure learner attendance at training or program completion rates. Learner perceptions of training content, such as self-reported satisfaction surveys, are a primary efficacy measure for 70 percent of learning program administrators. These vanity metrics may satisfy an executive board’s desire for proof of learning impact.
In some cases, these measurement methods tell a story about risk or return on investment (ROI). Course completion rates can satisfy regulatory requirements for training activities. Growth in learner enrollment can signify strong adoption of eLearning technology investments and possible ROI. However, these measures miss the mark on what actually matters: assessing learning effectiveness.
Global L&D administrators have almost universally encountered the Kirkpatrick Model, a framework for assessing learning effectiveness across four levels of learner engagement:
- Level 1. Reaction: Did learners consider training relevant and useful?
- Level 2. Learning: Did learners achieve stated training goals?
- Level 3. Behavior: Did learners apply training?
- Level 4. Results: Did training support desired strategic outcomes, such as increased productivity?
To assess the impact of learning beyond the first level of the Kirkpatrick Model, learning activities need to be built around assessment. Online learning and lifelong learning activities should be connected to clear competency outcomes, so that L&D can measure each learner’s progress toward training and behavioral goals and trace the connection between learning and strategic outcomes.
Competency-Based Assessments Are a Meaningful Measure of Learning
“Whenever possible, competency-based assessment must do more than just measure what a student knows,” writes Rebecca Klein Collins. Assessments of individual learning should determine whether a learner can “do” by using the right behaviors in a real-world situation. Multiple-choice tests tend to measure short-term information retention rather than a learner’s propensity to apply new competencies in an unfamiliar, complicated situation.
Scenario-based learning, simulations, and tasks can prepare learners for behavioral change. Frequent, competency-based assessment can also provide L&D with better data to measure individual, team, and program effectiveness.
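To make that data claim concrete, the sketch below shows one way frequent, competency-based assessment results could be rolled up alongside a traditional completion rate. It is illustrative only: the record format, competency names, learners, and the 70-point pass threshold are assumptions for this example, not features of any particular LMS or assessment tool.

```python
from collections import defaultdict

# Hypothetical assessment records: each row is one scored, scenario-based
# attempt by a learner against a named competency (score out of 100).
attempts = [
    {"learner": "avery", "competency": "incident-triage", "score": 62},
    {"learner": "avery", "competency": "incident-triage", "score": 81},
    {"learner": "avery", "competency": "root-cause-analysis", "score": 74},
    {"learner": "blake", "competency": "incident-triage", "score": 55},
    {"learner": "blake", "competency": "root-cause-analysis", "score": 58},
]

PASS_THRESHOLD = 70  # assumed mastery cut-off for this example


def completion_rate(enrolled: int, completed: int) -> float:
    """The traditional metric: the share of enrolled learners who finished."""
    return completed / enrolled if enrolled else 0.0


def competency_attainment(records: list[dict]) -> dict[str, float]:
    """Share of learners who have demonstrated each competency at least once.

    A learner 'attains' a competency if any attempt meets the threshold,
    which rewards eventual mastery rather than first-try recall.
    """
    attempted = defaultdict(set)  # competency -> learners who attempted it
    passed = defaultdict(set)     # competency -> learners who demonstrated it
    for record in records:
        attempted[record["competency"]].add(record["learner"])
        if record["score"] >= PASS_THRESHOLD:
            passed[record["competency"]].add(record["learner"])
    return {
        comp: len(passed[comp]) / len(attempted[comp])
        for comp in attempted
    }


if __name__ == "__main__":
    print(f"Completion rate: {completion_rate(enrolled=2, completed=2):.0%}")
    for comp, rate in competency_attainment(attempts).items():
        print(f"Attainment for {comp}: {rate:.0%}")
```

Run against the sample records, the completion rate reports 100 percent while attainment for each competency sits at 50 percent, which is exactly the gap between “finished the course” and “can do the work.”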
Track Behavior and Results, Not Just “Abilities”
“Ability should be a red-flag word in educational discourse,” writes Clifford Adelman. “At best [it] indicates only abstract potential…one doesn’t know if a student has the ‘ability’ or ‘capacity’ to do something until the student actually does it.”
Learning and behavior are better measures of learning effectiveness than surveying for reactions. Adelman’s argument also makes a case for measuring the long-term results of learning to understand its impact. Create clear strategic metrics for learning activities, such as higher customer satisfaction or lower error rates. Long-term results can show the true impact of learning on the organization and tell this story in terms that resonate with senior leaders, like hard cost savings or productivity gains.
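As a purely illustrative piece of arithmetic, here is what translating a long-term behavioral result into those terms might look like. The baseline and post-training error rates, transaction volume, and cost per error below are invented placeholders, not benchmarks.

```python
# Illustrative only: translate a lower post-training error rate into
# the hard numbers senior leaders track. All figures are assumptions.
baseline_error_rate = 0.040       # 4.0% of transactions had errors before training
post_training_error_rate = 0.025  # 2.5% six months after training
monthly_transactions = 10_000
cost_per_error = 35.00            # assumed average rework cost per error

errors_avoided = (baseline_error_rate - post_training_error_rate) * monthly_transactions
monthly_savings = errors_avoided * cost_per_error

print(f"Errors avoided per month: {errors_avoided:.0f}")      # 150
print(f"Estimated monthly savings: ${monthly_savings:,.2f}")  # $5,250.00
```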
Conclusion: Learning Impact Should Be Measured Against Clear Benchmarks
A traditional learning model uses a fixed amount of time in training to achieve variable results, per Klein Collins. In contrast, a competency-based model can use varying amounts of time or learning activities to help learners achieve fixed outcomes. Adopting more effective measures to assess the impact of learning on the individual, team, and organization can drive better individual progress toward competency-based outcomes and paint a clearer picture of learning’s value to the organization.