Structured Assessment Drives Effective Learning Design

By Elham Arabi, PhD.

Elham Arabi, PhD

Global Learning Engineer, Northeastern University

Learning Designer, Evaluator & Performance Improvement Expert Transforming People, Teams and Communities

Q: How did you arrive at your dissertation topic of using long-term memory principles in learning to drive higher quality instructional design? What led you to focus on this area?

A: It stems from a personal experience I had some years back in my career. I was required to complete certain training programs, which, as a passionate learner, I was excited about.

However, I soon realized the trainings didn’t provide the value I had hoped for. The lack of transfer of learning to the workplace really bothered me.

I started asking myself why this was happening. The evaluations showed I had met the learning objectives, yet that didn’t match reality. So I wondered: was it an issue with the design, the work environment, or the evaluation? I wanted my dissertation to have even a small positive impact on making training more effective.

My initial thought was to study aviation or healthcare training, as inadequate training in those fields can have serious consequences. However, accessing data in aviation proved difficult. So I decided to focus on healthcare as an area where training design and evaluation clearly matter.

In my coursework I had studied assessment, so I knew I wanted to focus on training design and evaluation. Will Thalheimer’s Learning-Transfer Evaluation Model stood out as a new way to go beyond just ‘smile sheets’ and actually measure end results. Thanks to his generosity, I was able to use his model in my dissertation to look at both the design and the evaluation of training through the lens of long-term memory principles.

Q: Can you talk a bit about your recommendation for the timing of delayed assessments after training? How many times should they be done and when?

A: This was an important question, as I was focusing on whether transfer of learning occurred. In my literature review of training transfer studies, I found most measured transfer through learner self-reports a month or more after training. However, self-reports alone are not diagnostic enough.

In consulting the research of experts like Baldwin and Blume, I found the recommendation was a delayed assessment at a minimum of 3 weeks, and up to 1 month, after training completion. I decided on 1 month for an initial knowledge retention test. I also did a second delayed test at 2 months to see longer-term retention.

The delayed timing matters because of the forgetting curve – testing too soon after training doesn’t show true retention. The delayed tests focused on knowledge retention and decision-making ability based on the training content. This helped evaluate retention of key knowledge (Tier 4) and decisions to perform a task (Tier 5).
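For readers unfamiliar with it, the forgetting curve is often summarized with the classic Ebbinghaus-style decay model, an idealized textbook formula rather than something from my study:

R(t) = e^(−t/S)

where R(t) is the proportion of material retained after a delay t, and S reflects the strength of the memory, which spacing, retrieval practice, and meaningful encoding all increase. It makes clear why a test given right after training overstates what learners will still recall a month later.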

For measuring actual skill transfer on the job, I created an observation checklist for supervisors to assess learners’ competence. True transfer requires observing performance, not just self-efficacy ratings. The checklists provided clear descriptors to reduce assessment bias.

My recommendation is an initial delayed assessment at 1 month, with potential follow-ups, using knowledge tests and on-the-job observation. This rigorous approach takes effort but provides diagnostic data on true retention and transfer.

Q: Can you share why you included both qualitative and quantitative data in your research, and how does it help build a more robust picture of the results?

A: Focusing only on quantitative data (e.g. test scores) provides a limited picture. I felt that gathering qualitative data through observations and interviews would provide additional insights. Combining both quantitative and qualitative data, known as a mixed methods approach, can make a study more robust.

Quantitative data provides measurable results, while qualitative data captures insights through behaviors, attitudes, and perspectives. Together they provide a more complete understanding.

For example, I conducted observations during training sessions. I noticed many learners were disengaged and on their phones, even though the training was mandatory. This qualitative data revealed a potential issue with the relevance or delivery of the training.

The observations informed my understanding of the quantitative results. I was then able to recommend improvements based on a well-rounded set of data, rather than just test scores.

This practice-based research approach aimed to produce actionable steps for the organization, not just inform theoretical understanding. By taking a mixed methods approach, I collected a richer data set that told a more compelling story to drive change.

Q: How did demonstrating the effectiveness of your training approach help gain buy-in from stakeholders at the hospital?

A: Getting stakeholder buy-in is a key challenge in our field. The hospital leadership was open to improving training, but the subject matter experts were resistant since evaluations seemed positive.

In a hospital setting, there can be high levels of ego and reluctance to change established methods. So beyond data, I had to focus on relationship building and gentle persuasion.

Once I could demonstrate better training outcomes through measurement, it helped convince leadership that this approach should expand. When stakeholders see concrete impact on performance, it’s compelling.

But the key was the long process of influencing the subject matter experts first, through empathy, guiding questions, and patience. I had to understand their perspective and focus on helping them apply learning principles in a non-threatening way.

This experience taught me a lot about human behavior and overcoming mindsets. Data is crucial, but pairing it with a human-centered approach is often what enables real change in organizations. Winning over people’s hearts as well as minds is the art of change leadership.

Q: You were able to influence the instructional development habits of the subject matter experts in your dissertation. Can you talk about how that learning happened and what changes you saw?

A: Typically, subject matter experts are focused on transferring knowledge, not on learning science. Long lectures are common because they don’t realize the limitations of passive learning. I had witnessed this in healthcare training too.

The first thing I learned was to avoid words like “improve” which can imply criticism of their work. I asked to review their materials, make tweaks, and get their feedback. I didn’t claim expertise but offered help.

Sharing research can sometimes backfire if it makes them feel intimidated. So I took time to empathize and understand their perspective. They were dedicated experts trying to reduce errors.

Rather than telling them outright how to change, I asked questions from the learner’s view – “Will they remember all these steps?”, “What if they forget this back on the job?” This led them to ideas like job aids and scenario practice.

I also tidied up lengthy text-heavy slides to visibly improve them. Small changes showed I could add value. Patience and understanding, not claims of expertise, softened resistance.

Influencing subject matter experts requires empathy, asking guiding questions, and practical improvements. I avoided criticism and acted as a servant to their expertise. This gradually opened them to applying learning principles, without bruising egos. The key was viewing them as partners, not adversaries.

Q: What are your thoughts on the role of AI in learning and development?

A: I try to take a measured approach whenever there’s a frenzy about a new technology like AI. It’s easy to get caught up in the hype. I think we should critically evaluate how AI can truly enhance learning, not just follow the crowd.

I’ve seen some helpful uses, like chatbots that provide learner support 24/7. But I have concerns about overreliance on AI. It is essentially codifying collective human intelligence and work without citations or credit. There are intellectual property issues to consider.

AI could have a role as a thinking partner to collaborate with designers, like reviewing and improving content. But fully handing over instructional design to AI seems risky right now. If flawed data, research or practices get embedded in the algorithms, how can we control for that?

For assessments, it took me months of iteration with SMEs to create valid, skill-aligned questions. I’m skeptical AI could replicate that nuanced human process of assessment creation.

My view is we should keep an open but critical mindset. If AI allows us to automate rote work and focus on deeper thinking, there is value. But we need to vet its limitations and biases diligently. AI may complement human intelligence someday, but likely won’t replicate the empathy, ethics and reasoning our field demands.

Q: What else would you want to share with readers and others in the learning and development field about bringing research into practice?

A: I’ve been advocating for bringing more research into practice in our field. One gap is that when we get swamped with projects, we can stop thinking deeply about learners’ needs. We start focusing more on just producing courses quickly.

I make recommendations for how we can cultivate a scientific mindset: constantly observing, questioning, and evaluating whether we’re really doing right by learners, not just ticking off completed projects.

We have to be careful consumers of research and realize that not all studies are robust or unbiased. But good research, combined with a curious, solutions-oriented approach, can profoundly improve how we design and deliver learning.

Learning is the foundation of civilization when done right. By dedicating ourselves to continuous improvement through research and reflection, we can have an incredible impact on societies. I believe that wholeheartedly.

So, stay curious, lean on good research, and keep the focus on truly serving learner needs instead of only efficiency or compliance. This field can change lives when we approach it with care, science and heart.

Q: If you could travel back in time and give your younger self career advice, what would that be?

A: The most important advice I would give my younger self starting out in this field is to stay humble and open-minded. Early in my career, with a new master’s degree, I mistakenly thought I was suddenly an expert that subject matter experts should just listen to. I had a mindset of “do it my way or I won’t work with you.”

But expertise takes years of experience, constant learning, and partnership with others to develop. Just having a degree or title doesn’t mean you have all the answers. Human beings and the learning process are complex. There is always more to understand.

I learned the hard way that an arrogant stance severely limits your ability to truly help learners and organizations. The best solutions come from collaborating with empathy and curiosity, not just insisting you know best.

So what I wish I could tell my younger self is: you have knowledge to share, but just as much to learn. Seek to understand first, not just be understood. Admit when you’re unsure. And check any tendency toward pride or stubbornness. A spirit of humility will carry you much further in your purpose of guiding others to learn and grow.
