Metrics and Models for Measuring the Success of Learning

By Jack J. Phillips, PhD

Chairman, ROI Institute

Internationally renowned measurement and evaluation expert, consultant, and author.

Q: What new metrics or methodologies are emerging to measure the success of initiatives to improve employee engagement?

A: First, I would like to point out some changes in measurement along the value chain. As you know, the success of any program follows a chain of value that goes through five levels:
Reaction, Learning, Application, Impact, and ROI.

By the way, this logical flow dates back to the 1800s and was first brought to the learning community by Raymond Katzell in the 1950s.

At Level 1, Reaction, we see the net promoter score becoming a critical metric. Essentially, it asks whether you would recommend this program to others, scored using the standard net promoter calculation.
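For readers who want the arithmetic, here is a minimal sketch of the standard net promoter calculation; the 0-10 scale and the promoter/detractor cutoffs are the general NPS convention rather than anything specific to this interview:

```python
def net_promoter_score(ratings):
    """Standard NPS: percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Responses to "Would you recommend this program to others?" on a 0-10 scale
print(net_promoter_score([10, 9, 8, 7, 6, 9, 10, 3]))  # 25.0
```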

At Level 2, Learning, we see more reliance on self-assessment of learning, particularly when measurement at Level 3, Application, is planned. This lets us spend less time on formal, structured assessment processes. However, the data must be collected in a non-threatening, unbiased way.

For Level 3, Application, we see three standard measures evolving: (1) Extent of use, (2) Frequency of use, and (3) Success with use.

Also, engagement is changing. It has moved from its beginnings in job satisfaction to what it is today, sometimes labeled employee experience. Engagement measures have evolved, driven in part by prominent organizations such as Gallup. Engagement has also become very powerful, as it often correlates with retention, productivity, quality, customer satisfaction, and safety, among other measures.

Q: What trends are you seeing in how organizations quantify the impact of technology-enabled learning programs? Are there any new best practices emerging?

A: The impact of a learning program is the consequence of its application. The impacts almost always appear in the organization's work units and are the standard measures already tracked there; they are captured in the system and attract management's attention. Those measures don't change just because we have a different way of learning, so the impacts themselves don't change.

The problem with technology-enabled learning is that the impacts aren’t usually improved to the same extent that in-person learning would deliver. We’ve tried to change that situation in our most recent book, Designing Virtual Learning for Application and Impact: 50 Techniques to Ensure Results (ATD, 2023), which presents best practices for ensuring that learning drives the identified impact measures.

Q: What changes are happening in how companies isolate the effects of training programs from other factors? Are any new techniques being used?

A: It is becoming clear that more programs must be pushed to the impact level, driven in part by requests from top executives. And when you connect programs to impact, it is essential to sort out the program's effect on those impact measures. Without this step, there is no credibility. Therefore, we see it being done routinely. That's one of our standards: you have to isolate the effects of your program.

There has not been much change in the use of experimental versus control groups, as shown in our benchmarking, for over two decades. However, there has been an uptick in mathematical modeling, trend-line analysis, and the use of estimates. For estimates to be credible, they must come from the most credible sources, be collected in a non-threatening, unbiased way, and be adjusted for the error in the estimate.
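To make the adjustment step concrete, here is a minimal sketch, assuming the common practice of discounting each source's estimate of the program's contribution by that source's own confidence in it; the function name and figures are illustrative:

```python
def adjusted_contribution(improvement, attribution_pct, confidence_pct):
    """Discount a source's claimed program contribution by that source's confidence."""
    return improvement * (attribution_pct / 100) * (confidence_pct / 100)

# Illustrative figures: a $40,000 monthly improvement, with 60% attributed to
# the program and the estimator 80% confident in that attribution, is
# credited at $19,200 -- the more conservative, and more credible, number.
print(adjusted_contribution(40_000, 60, 80))  # 19200.0
```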

Resource Available: If you need a reference on these techniques, let us know and we can provide a guide.

Q: Are there any new types of training or measurement initiatives that you see becoming more popular or effective for improving engagement and performance?

A: We see soft skills and mental wellbeing growing in popularity, particularly since the pandemic. We see more focus on empathy, innovation, and inclusiveness. Also, we are seeing a tremendous emphasis on culture, ensuring engagement and culture work together. Finally, because we have experienced so many changes, we see much focus on change management.

Q: Have you noticed any shifts in how quickly organizations are able to see results from engagement or training programs compared to the past?

A: We are measuring sooner, and the timing of measurements depends on the program. We are getting away from following up at three and six months because we realize that at Level 3, application is often a matter of weeks rather than months. If individuals don't use what they've learned quickly, it's lost, and the time it takes to reach systematic, routine use of a skill set depends on the complexity of the skill and the opportunity to use it. This usually makes Level 3 data available quickly. Sometimes the impact data arrives quickly as well; at other times there is some delay. The key is to collect impact data as soon as the impact has occurred.

Q: What evolving challenges are there in measuring the success of these types of initiatives? How are organizations adapting?

A: It’s important to be sensible when we measure success at Levels 4 and 5, particularly for Level 5. We suggest measuring only 5-10% of programs at the ROI Level. This keeps the resources to a minimum yet addresses the executives’ concerns of measuring ROI for the important, expensive, and strategic initiatives. Also, it’s critical to be proactive with this issue and not wait until a request for impact and ROI analysis. When you wait for the request, you have a short timeline to provide data and may be unable to do that. You are now on the defense; you need to be on the offense. ROI is on someone else’s agenda, and you want to keep it on your agenda.

Q: What advice are you giving organizations today on focusing their measurement of training and engagement initiatives that differs from the past?

A: We need to get away from focusing so many resources on Level 1, Reaction, and Level 2, Learning, and instead measure Level 3, Application, and Level 4, Impact, at the same time. Those two levels go together just as Levels 1 and 2 go together: impact doesn't occur without application, and impact often influences more application. Occasionally, some evaluations should go to Level 5, ROI, as mentioned earlier, particularly those that are expensive, solve a major organizational problem, tackle major opportunities, or connect to strategy.

The problem is that we have underinvested in measurement and evaluation. Most organizations just beginning the journey toward serious evaluation spend only about 1% of the budget on measurement and evaluation; we suggest allocating about 3 to 5%. It's difficult to ask for more money for this effort, so you have to show the value of doing more of this level of evaluation and make the case for more resources.

Q: How should organizations adjust the metrics and targets for engagement and training programs to align with current best practices?

A: Be wary of anyone offering specific metrics at the impact level for particular programs. Impact metrics are very specific to the program and the organization. For example, when someone says that leadership development always drives retention, we would say, "Not necessarily." If retention is not a problem, leadership development is not likely to change it.

It is best to start with the end in mind. If you want to end up with impact, start with the impact. That's how our ROI Methodology process model is constructed: with the impact in mind, you essentially design for the success you need, ensuring that you deliver the impact in the end. It may be possible to find best practices for engagement measures, as they have been evolving and changing in the industry; make sure they come from a reputable organization.

Avoid terms like Return on Learning (ROL), Return on Training (ROT), Return on Education (ROE), or Return on Expectations (ROE). These often confuse executives and stir up a chuckle from the finance and accounting team. Use ROI as it’s defined in the finance and accounting literature.
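For reference, that finance definition is net program benefits divided by fully loaded program costs, expressed as a percentage. A minimal sketch with illustrative figures:

```python
def roi_percent(benefits, costs):
    """ROI as finance defines it: net benefits divided by costs, as a percentage."""
    return 100.0 * (benefits - costs) / costs

def benefit_cost_ratio(benefits, costs):
    """Companion metric: monetary benefits per dollar of program cost."""
    return benefits / costs

# Illustrative figures: $150,000 in monetary benefits against $100,000 in costs.
print(roi_percent(150_000, 100_000))         # 50.0 -> 50% ROI
print(benefit_cost_ratio(150_000, 100_000))  # 1.5  -> 1.5:1 BCR
```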

Regarding best practices throughout the process, we recommend working with individuals who have extensive experience in serious evaluation. Our ROI Methodology has become the most widely used evaluation system in the world. We benchmark best practices with users in this field, focusing on those within one to five years of implementation.

Resource Available: We can share a benchmarking report with your audiences if interested.

Q: Overall, what new directions and innovations are you seeing in how companies evaluate their learning and engagement initiatives compared to prior years?

A: When you analyze what drives results, it’s a team effort. All the stakeholders in the L&D function are an essential part of it. The designers, developers, facilitators, and owners have to do their share to make this work. The burden of measurement and evaluation should not just be on the evaluation team. If that happens, you will have a huge measurement and evaluation unit with dozens of people involved. You don’t really need that; you need to share the responsibility with others.

Additionally, we need to be smart with technology to ensure it assists us, doesn’t cost too much, and actually delivers results. We should use artificial intelligence when it works and helps us. We need to overcome the fear of negative results. This fear is the number one barrier, but organizations are realizing that if a program is not working, the L&D team needs to know what happened. In fact, someone will know it’s not working, so the L&D team must find out for sure and correct the problem. Process improvement is the key.
