Navigating Learning Effectiveness and Evaluation

By Robert Brinkerhoff, Ed.D.


Senior Adviser and Head of Impact and Evaluation at Promote International

Author, keynote speaker, and internationally recognized expert in learning effectiveness.

Q: With over two decades of experience in the field of learning effectiveness and evaluation, could you share some pivotal moments in your career that significantly shaped your approach to workplace learning?

A: Following my undergraduate college years, I spent five years during the Vietnam War era as an officer in the United States Navy. While I didn’t realize it at the time, since I had no idea what career I would be pursuing when I finished my Navy service, there were indeed several incidents and circumstances that shaped my later career in learning and development.

Any military experience is steeped in training; most of the time in the military is in fact spent in training. But that doesn’t mean that all of the training I experienced or was responsible for always worked.

There was one key incident that very much influenced my later career, though I was blissfully unaware of it at the time. It is only in reflection, looking back across several decades, that I realized the significance of what I was learning.

How 1 minute with a leader can undo 2 weeks of good training

Here’s the story: I was serving at a small, remote passive-sonar Naval listening station on an island deep in the West Indies, where our role was 24×7 tracking of Russian nuclear submarines and reporting on their movements. At one point during that tour of duty, I was sent to Norfolk, VA, for a two-week training course in messaging and communication, with a particular focus on a systematic way to handle what we called “flash” messages: messages indicating we had located a Russian nuclear submarine and had to coordinate communications with other naval units in the air, on the surface, and under the sea.

I caught a military transport flight back to our island after completing the two-week course, driving my Jeep back from the dusty airstrip to our small base. As I entered the base, I noticed that our commanding officer was doing some gardening at the gate entrance, planting some flowers in pots. I stopped and leaned out of the Jeep to say hello. The conversation unfolded like this:

Captain, recognizing me: “Aha, Brinkerhoff, how you doing? Haven’t seen you in a while.”

Me: “That’s right, sir. I’ve been up in Norfolk the last couple weeks at Comm school.”

Captain: “Oh, right, I remember, sure, Comm school. Right. Uh… how did it go?”

Me: “It was great, sir, a lot of good stuff that I can see could be pretty helpful.”

Captain: “Well, good, good. Glad to hear that. Probably a good course, right?”

Me: “Yes sir, it was very good.”

The captain then stands, brushes some sand from his trousers, leans against the hood of my Jeep, and, looking skyward and waxing philosophic, says: “You know, Brinkerhoff, there are kind of two ways we can look at the world…”

Me: “Yes, sir?”

Captain: “Yeah, for example, there’s the way we do things here, and then there’s the way they do things up there in Norfolk.”

Me: “Yes sir…?”

Captain: “So, let me ask you this: where are you now?”

Me: “Um… well… I’m here, sir.”

Captain: “Right, you are indeed here. So now that you’re here, how are we going to do things?”

Me: “The way we do them here, sir?”

Captain: “Exactly!” he states exuberantly. “We’re on the same page. Welcome back.”

I continued on into the base, parked my Jeep, and headed to my little cubicle in the bachelor officers’ quarters. I unpacked my gear, paying special attention to the four-inch-thick notebook of exercises, job aids, et cetera, that I had brought back with me from the training. I then went to the burn bin, where we destroyed all copies of classified information, ran my notebook through the shredder, and fed it into the burn pile.

It was not until many years later, as I completed my graduate degree and was working in corporate training and development, that I looked back on this event. It made vividly clear to me that my manager had been able to undo, in a little less than 60 seconds, everything I had learned in my two-week training course. Which was, by the way, an excellent training course; I had learned it well and was at the top of my class. But so goes the way of training when there is a lack of alignment and a lack of manager support.

Q: Could you elaborate on the evaluation methodologies you’ve developed or advocated for at the Brinkerhoff Evaluation Institute, particularly those that have proven most effective in measuring the impact of learning initiatives?

A: As I noted, after my Navy service I used my GI Bill benefits to complete a doctoral program in program evaluation at the University of Virginia. There, I learned all sorts of sophisticated and complex evaluation and research methodologies and statistical analysis procedures, almost none of which, I must add, have much if any practical applicability for L&D practitioners.

I also spent six years evaluating training and education programs for many U.S. government agencies in nearby Washington, DC. After considerable frustration about the lack of application of evaluation findings and outcomes, I changed my career focus to work in corporate learning and development. I spent sabbatical leaves in the L&D function at two major US corporations, the Upjohn Company (now part of Pfizer) and the Kellogg Company, both near my home in Kalamazoo, MI. I then spent more than 20 years evaluating dozens and dozens of corporate training programs for companies and agencies around the world, too numerous to mention.

As my colleagues and I did this evaluation work, we could not help but notice that training programs always produced predictable results. In a nutshell, any corporate training program will have these predictable results:
  • Some people will use their training, use it well, and achieve valuable work results.
  • Some proportion of trainees will make no use at all of their training; for one reason or another, they fail to apply any of it.
  • Everyone else is in the middle: the many people who may pick up one or two things from the training, perhaps give them a try, but eventually give up and go back to doing things the way they did them before the training.
As we studied these three consistently present groups of trainees, we learned, over and over again, the lesson I recounted from my early Navy experience: a manager can undo training’s impact.
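To make the three groups concrete, here is a minimal sketch, assuming a hypothetical post-training survey, of how responses might be bucketed this way; the field names, thresholds, and data are invented for illustration, not Brinkerhoff’s actual instrument.

```python
# A minimal sketch: bucketing post-training survey responses into the
# three predictable groups described above. Field names, thresholds,
# and data are hypothetical.

from collections import Counter
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    trainee_id: str
    applied_count: int   # hypothetical: number of trained behaviors the trainee tried
    still_using: bool    # hypothetical: still applying any of them today?

def classify(r: SurveyResponse) -> str:
    """Assign a trainee to one of the three predictable groups."""
    if r.applied_count == 0:
        return "non-user"          # made no use of the training at all
    if r.still_using:
        return "high-impact user"  # used the training and sustained the change
    return "tried-and-lapsed"      # gave it a try, then reverted to old habits

responses = [
    SurveyResponse("t1", 5, True),
    SurveyResponse("t2", 0, False),
    SurveyResponse("t3", 2, False),
]

print(Counter(classify(r) for r in responses))
# Counter({'high-impact user': 1, 'non-user': 1, 'tried-and-lapsed': 1})
```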

Training is one part of a behavior change strategy

From all this evaluation experience, we could not help but learn that the training itself is never the sole cause of any benefit to an organization. It is always factors in the context of the training that make or break impact. These are factors like manager engagement and support, feedback and coaching, the opportunity to apply the training at work in a relatively safe and risk-free environment, alignment of the training with organization and work objectives and purposes, the timing of the training, the precision (or lack of precision) with which trainees were selected, and so on. We also could not help but learn that these factors can be managed: by paying more attention to these context factors and managing them more successfully, we can dramatically change the rate of impact of training.

Simple, impactful evaluation

We teach people how to do impact evaluation simply and practically, and, importantly, in a way that helps teach the organization how to do better the next time.

Our methodology is relatively simple. We firmly believe that impact evaluation has become far too mystifying and complex, and that is part of the reason it is largely avoided by many learning and development practitioners. This need not be the case. There is no mystery to doing good impact evaluation. It can be done simply and practically. It is not rocket science, and it does not take a PhD in evaluation to do it.

The key to demystifying impact evaluation is to come up with a practical and simple definition of what we mean by impact. Training produces value across a series of events in a sequential chain. First, trainees have to learn something about how and why to do their work in an improved or different way. Then that learning has to be translated into action and behaviors on the job.

Measure what matters

Those actions and behaviors on the job must be employed in consequential circumstances, what we call “moments that matter,” and applied in a way that, done effectively, would help improve a result the organization needs in order to achieve a larger organization or business goal. That application of the training, the behavior change, is the key factor in whether training will lead to value for the organization or lead to failure. If there is no behavior change, there will be no subsequent value. There is a push in the profession, we know, to get so-called business impact data. This is all well and good; of course training should seek to influence business outcomes and worthwhile organizational results. But there is a problem with using business impact data as an indicator of the success, or lack of success, of training.

Measure and adjust

Business results and organizational outcomes such as increased market share, more sales, reduced costs, improved retention, improved recruitment of talent, and so on are all worthwhile goals, and training should aim to improve them. But waiting around to measure whether there is a change in these metrics is a recipe for disaster: those longer-range business outcome metrics are influenced by many more variables than training alone can account for. If, for example, we implement a sales improvement program and notice that six or twelve months later sales have improved, that may or may not have anything to do with the training. Chances are it has very little to do with it.

The most worthwhile measure to go after is the behavior data. We know that if there is no behavior change, we will never see the needle moving on any of those longer-range outcomes. And if we do see the needle moving, it can be a very misleading indicator: it tells us nothing about whether the training worked, why it worked if it did, or why it failed if it did not. We advocate for defining impact as behavior change and for building simple and practical methods to measure that behavior change. The payoff is that if we don’t see the behavior change, we find that out in time to do something about it. We don’t wait until the horse has fled the barn to measure whether the door was open.

The second thing that makes this approach practical and simpler is that when we do see behavior change, we always find out why. Why is the behavior changing? When the training is used, what helped get it used? When it wasn’t used, what got in the way? We know what those factors are likely to be, so we search for them and correlate them against the degree of behavior change. This gives us very actionable and worthwhile information: if we want to see more behavior change and more impact, what do we need to do? What changes do we need to make to effect better behavioral outcomes?
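A minimal sketch of that correlation step, assuming hypothetical data: each trainee rates a few context factors (the factor names and rating scale here are illustrative, not a prescribed instrument), and we compute a simple Pearson correlation between each factor and an observed behavior-change score.

```python
# A minimal sketch of the "find out why" step: correlate context-factor
# ratings against an observed behavior-change score. Factor names, the
# rating scale, and all data here are hypothetical illustrations.

from statistics import correlation  # Pearson's r; Python 3.10+

# Each list holds one rating (1-5) per trainee, in the same trainee order.
factors = {
    "manager_support":   [5, 1, 4, 2, 5, 3],
    "coaching_received": [4, 2, 5, 1, 4, 2],
    "chance_to_apply":   [5, 2, 3, 1, 5, 4],
}
# Observed degree of behavior change per trainee (0 = none, 1 = full adoption).
behavior_change = [0.9, 0.1, 0.7, 0.2, 0.8, 0.4]

for name, ratings in factors.items():
    r = correlation(ratings, behavior_change)
    print(f"{name:18} r = {r:+.2f}")
```

Factors that correlate strongly with behavior change become the levers for the “who needs to do what” question discussed below.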

Questions investigated

Our impact methodology addresses just a few key questions (a minimal tabulation sketch for the first of them follows the list):
  1. How often and how much is the training being used?
  2. When, where, and how is it being used, and what differences exist between where the training is being applied and where it is not?
  3. When the training is used, what good does it do? (If the answer is that it does no good, then we’re done, and the company moves on to another focus or goal for behavior change, since this one is clearly not working.)
  4. When the training is not used, why not? What gets in the way?
  5. And then the payoff question: who needs to do what to get more people using their training as effectively as those who are using it most productively?
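As a minimal illustration of the first question, here is how “how often and how much” might be tabulated from hypothetical follow-up survey data; the names and numbers are invented.

```python
# A minimal sketch for question 1: tabulating "how often and how much"
# from hypothetical follow-up survey data. Names and numbers are invented.

usage_reports = [
    {"trainee": "t1", "uses_per_week": 4},
    {"trainee": "t2", "uses_per_week": 0},
    {"trainee": "t3", "uses_per_week": 1},
    {"trainee": "t4", "uses_per_week": 0},
    {"trainee": "t5", "uses_per_week": 6},
]

users = [r for r in usage_reports if r["uses_per_week"] > 0]
usage_rate = len(users) / len(usage_reports)
avg_per_user = sum(r["uses_per_week"] for r in users) / len(users)

print(f"{usage_rate:.0%} of trainees report any use")        # 60%
print(f"{avg_per_user:.1f} average uses/week among users")   # 3.7
```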
Pursue these questions, and you’ll be doing evaluation that pays off richly, improving the value of training and learning how to make training more effective in a continuous improvement cycle. And another benefit: you’ll have compelling success examples showing that when the training is used, it does things of worthwhile value. This data goes a long way toward marketing the training and building support and the likelihood of more engagement, which will, in turn, build more success.