L&D measurement: from lampposts to spotlights

by Finn Patraic


Find your L&D measurement spotlights: Start with the business objective

In the previous article in this series, we explored the streetlight effect through the old story of a drunk searching for his keys under a lamppost instead of the place where he actually lost them. L&D measurement often struggles with its own streetlight effect: teams measure where they can instead of where the answers actually are.

It is important to keep in mind that we measure and evaluate learning for different reasons. You may want to continuously improve your programs, prove compliance, or measure impact (including return on investment). Know your reason before you start measuring!

So how can we escape the fate of the lamppost in L&D? The first step is to change where we start our search. Instead of designing training and only later asking, “Okay, how do we measure its impact?”, flip the script: start with the end in mind. Identify the business outcome you are trying to achieve and let it drive both the training design and the measurement plan.


Build a backward data strategy

Starting with the business objective and working backward may seem obvious to some, but it represents a major shift. Surprisingly, fewer than 4% of companies say they design learning programs around specific measures defined in advance (1). The remaining 96%? Many create programs based on the needs or requests they receive, deliver the training, and only then think about evaluation (if at all). By not building measurement into the design phase, L&D teams have no way to gauge their efforts beyond the very basics, hence the reliance on those easy post-hoc metrics (1).

Starting with the business objective means clarifying what success looks like in organizational terms. For example, if the company aims to reduce safety incidents by 20%, that is your North Star. With it, you can work backward:

  1. Who can reduce safety incidents, directly and indirectly? (You may have to pick the target audience with the most significant impact, because you cannot serve everyone.)
  2. What behaviors must change to achieve this 20% reduction?
  3. Which employees (your audience) need to adopt these behaviors?
  4. What currently prevents them from doing so (skills, knowledge, motivation, process problems)?
  5. Only then do you decide whether training is part of the solution, and if so, design the intervention to target those behaviors.
  6. Above all, you also identify key performance indicators (KPIs) in advance, in this case the safety incident rate, and plan to track them. Your measurement approach might involve collecting baseline safety data, then comparing it after training (perhaps against a control group or a trend line) to see whether the needle moved. You can also plan on-the-job observations or assessments to check whether employees follow the new safety procedures (a direct behavior measure). A minimal sketch of this baseline comparison appears right after this list.
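
To make item 6 concrete, here is a minimal sketch, in Python, of a baseline-versus-post-training comparison against a 20% reduction target. The figures and the `incident_rate` helper are hypothetical illustrations, not data from the article.

```python
# Hypothetical sketch: compare a safety-incident KPI before and after training.
# All figures below are invented for illustration; plug in your own data.

def incident_rate(incidents: int, hours_worked: float) -> float:
    """Incidents per 200,000 hours worked (an OSHA-style rate)."""
    return incidents / hours_worked * 200_000

baseline = incident_rate(incidents=48, hours_worked=1_900_000)  # before training
post = incident_rate(incidents=37, hours_worked=1_950_000)      # after training

change = (post - baseline) / baseline * 100
target = -20.0  # the business objective: a 20% reduction

print(f"Baseline rate: {baseline:.2f}, post-training rate: {post:.2f}")
print(f"Change: {change:+.1f}% (target: {target:.0f}%)")
print("Target met." if change <= target else "Target not yet met.")
```

A control-group version of the same idea appears near the end of this article.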

This approach is sometimes called “backward design.” It ensures that training is not a shot in the dark. In fact, it might reveal that training is not the right solution at all. Perhaps the root cause of the problem is a broken process, a lack of appropriate tools, or an incentive system that rewards the wrong behavior. In those cases, the answer could be something outside traditional training (for example, fixing the process or providing job aids). By starting with the business objective and a thorough needs analysis, L&D can avoid wasting effort on training programs that shine their light in the wrong place.

Aligning with the business

Recent research from the Association for Talent Development (ATD) found that only 43% of talent development professionals say their learning and business objectives are aligned (2).

When L&D designs with this business alignment, measurement becomes much simpler. You have set clear targets (KPIs or behavior changes), and you collect data on those targets. You are not searching aimlessly; you have a map that points to the park where the keys were lost, even if it is dark at first.

Over time, this practice also builds credibility. Business leaders see L&D focusing on the results managers care about (for example, sales growth, quality improvement, turnover reduction) rather than reporting how many employees completed a course or viewed a resource. And when a training program does not achieve the desired result, that becomes an opportunity to learn and adapt rather than a reason to hide behind vanity metrics.

Measurement should be about learning what works and what does not, not only about proving success. When L&D focuses primarily on what happens after a learning event to secure the desired result, it moves from being a cost center under scrutiny to a strategic partner surfacing data-driven insights the business can use to make decisions.

Frameworks and models to guide L&D measurement: Kirkpatrick, Phillips ROI, and LTEM

Fortunately, L&D professionals do not have to wander entirely in the dark. There are models and frameworks for training evaluation that act as signposts (or perhaps different kinds of lanterns) to guide our measurement efforts (3). Three of the main ones are Kirkpatrick's four levels, the Phillips ROI model, and the Learning-Transfer Evaluation Model (LTEM). Each offers a lens on what to measure, and together they push us to go beyond the easy metrics.

Kirkpatrick's four levels of evaluation are the best known and most widely documented, so I will not spend time on them here. The challenge I have seen with the model lies in its practical implementation in workplace learning: L&D starts with Level 1 evaluation and often stays there. Even when teams reach Level 2 (learning), the measurement is often a short-term recall check (or worse, rote memorization during a course).

Jack Phillips, through the ROI Institute, added a Level 5, ROI, on top of the Kirkpatrick model. ROI (return on investment) essentially asks: was the training worth it? The Phillips model calculates the monetary benefits of the training and compares them to its costs, which yields an ROI percentage or ratio (4). For example, if a leadership development program cost $100,000 and led to around $300,000 in productivity gains or improved sales, the ROI would be 200%. It appeals to executives because it speaks the language of finance.
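
As a quick arithmetic check on that example, here is a minimal sketch of the standard ROI formula in Python. The function name is mine, and the numbers are the article's illustrative figures.

```python
# Minimal sketch of the Phillips-style ROI calculation:
# ROI (%) = (benefits - costs) / costs * 100

def roi_percent(benefits: float, costs: float) -> float:
    """Return on investment as a percentage of program costs."""
    return (benefits - costs) / costs * 100

# The article's example: a $100,000 program yielding roughly $300,000 in benefits.
print(roi_percent(benefits=300_000, costs=100_000))  # 200.0, i.e., a 200% ROI
```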

Calculating ROI for every project can be tricky and sometimes controversial: isolating the effect of training in dollar terms involves certain assumptions. Phillips recommends techniques such as converting improvement measures into money, and even asking participants to estimate how much of the improvement was due to the training (then discounting for optimism). The most important takeaway for me is that it keeps us ultimately accountable for results, not just activity. The ROI Institute also offers TDRP as a standard library of measures. Check it out (5)!
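
To illustrate the kind of discounting Phillips describes, here is a minimal sketch that scales each participant's estimated benefit by the share they attribute to training and by their confidence in that estimate. The data, weights, and structure are hypothetical assumptions, not the ROI Institute's worksheet.

```python
# Hypothetical sketch: discount participant-estimated benefits for attribution
# and confidence, one Phillips-style way to correct for optimism.

estimates = [
    # (estimated annual benefit in $, share attributed to training, confidence)
    (12_000, 0.50, 0.80),
    (8_000,  0.75, 0.60),
    (20_000, 0.40, 0.90),
]

adjusted_total = sum(
    benefit * attribution * confidence
    for benefit, attribution, confidence in estimates
)

print(f"Adjusted benefit estimate: ${adjusted_total:,.0f}")  # $15,600
```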

Kirkpatrick and Phillips both make a key point: a training evaluation is not complete until you have examined its impact on the job and on the organization. Put another way: did it change behavior, and did that matter to the business?

The Learning-Transfer Evaluation Model

Over the past five years, I have been implementing a more recent model, the Learning-Transfer Evaluation Model (6). LTEM was developed by Will Thalheimer in response to the gaps he saw in common measurement practices. It is an eight-level model that focuses explicitly on learning transfer, meaning: do people actually use what they have learned?

The lowest levels of LTEM (Levels 1 and 2) cover things such as attendance and participation: did people basically show up or finish the learning activity? For example, we measured engagement (defined as sustained focus on the task) at Level 2 across three components: physical (what learners do), emotional (how they feel and how connected they are), and cognitive (how challenged they are and how hard they think). Level 3 covers learner perceptions, much like Kirkpatrick Level 1, but with LTEM we implemented a new set of questions that focus on performance and revolve around behavioral drivers (MOJO: motivation, opportunity, job capabilities, and outcomes).
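
As one way such engagement data could be rolled up, here is a minimal sketch that averages 1-to-5 survey responses per component. The example questions, scale, and scores are my assumptions, not the author's actual instrument.

```python
# Hypothetical sketch: average 1-5 survey responses per engagement component.
# The three components follow the article; the responses are invented.

from statistics import mean

responses = {
    "physical":  [4, 5, 3, 4],   # e.g., "I actively worked through the exercises"
    "emotional": [3, 4, 4, 5],   # e.g., "I felt connected to the material"
    "cognitive": [5, 4, 4, 4],   # e.g., "The tasks made me think hard"
}

for component, scores in responses.items():
    print(f"{component:>9}: {mean(scores):.2f} / 5")
```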

Levels 4 through 6 examine what was learned more substantively, from simple retention of facts to demonstrating skills in realistic scenarios (task performance). However, these are often still measured in a training context (quizzes, simulations), which is important but not yet the real world. Level 7 is where the magic happens: it measures learning transfer. Do learners perform the right behaviors on the job (7)?

Behavior change does not happen by chance

Level 7 of LTEM corresponds to behavior change on the job, similar to Kirkpatrick Level 3, but with an emphasis on directly assessing performance in the work environment. Finally, Level 8 examines the effects of that improved performance on broader outcomes, in other words, organizational impact, similar to Kirkpatrick Level 4 (and even beyond, with ripple effects on colleagues or customers).
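
Pulling the description above together, the eight levels can be summarized as a quick reference. The short labels are my paraphrase of this article's prose, not Thalheimer's official wording.

```python
# Quick-reference map of the eight LTEM levels as described in this article.
# Labels are paraphrased from the prose above, not official model wording.

LTEM_LEVELS = {
    1: "Attendance (showed up)",
    2: "Activity / engagement (participated, stayed on task)",
    3: "Learner perceptions (performance-focused survey questions)",
    4: "Knowledge (retention of facts)",
    5: "Decision-making competence (in a training context)",
    6: "Task competence (skills shown in realistic scenarios)",
    7: "Transfer (right behaviors performed on the job)",
    8: "Effects of transfer (organizational and ripple effects)",
}

for level, label in sorted(LTEM_LEVELS.items()):
    print(f"Level {level}: {label}")
```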

One of the reasons we chose LTEM is its nuanced view and its message about what matters: it highlights that the value of training comes from what happens after the training. On top of the backward design mentioned above, this model provides practical guidance that lets every L&D role make a difference. More on this in the next article.

Isolating the impact of training

One of the biggest obstacles mentioned in the ATD survey is that L&D professionals believe it is too difficult to isolate the impact of training. I agree; they are not wrong. And that is why I strongly recommend not only measuring backward but designing solutions backward: start with the business objective and the desired gain (or any other effect indirectly linked to key measures), then the supporting performance objectives, then the audience that can make it happen, then the behaviors. If there is no behavior change, there is no impact.
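
Where a comparison group is available, one common way to approximate that isolation is a difference-in-differences check: compare the trained group's change on the KPI with an untrained group's change over the same period. A minimal sketch with invented numbers:

```python
# Hypothetical difference-in-differences sketch: the trained group's change
# minus the untrained (control) group's change approximates the training effect.

trained_before, trained_after = 5.1, 3.8   # e.g., incident rate per 200k hours
control_before, control_after = 5.0, 4.7   # same KPI, no training

trained_change = trained_after - trained_before   # -1.3
control_change = control_after - control_before   # -0.3 (background trend)

effect = trained_change - control_change          # -1.0 attributable to training
print(f"Estimated training effect on the KPI: {effect:+.1f}")
```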

Whatever model or measurement framework you use, applying this backward chain from the business objective will make it easier to isolate the impact of learning. But what about the lack of time, resources, and expertise to do this at scale? In the next and final article, we will look at how AI can help and how the different L&D roles can benefit.

References:

(1) Measuring the impact of learning

(2) ATD research: Organizations struggle to measure the impact of training

(3) Model vs. framework: Understanding how each of them works

(4) ROI methodology

(5) What role does TDRP play in the measurement space?

(6) Beyond Kirkpatrick: 3 approaches to evaluating elearning

(7) Measuring learning: Asking the right questions
