In a recent article in CLO magazine, Dan Pontefract questioned the value of traditional training evaluation and the Kirkpatrick approach in particular (article re-posted here). The article raised the ire of the Kirkpatrick organization and Dan responded in a follow-up post. Others had observations on the post (see Don Clark and Harold Jarche). I’ve been involved in many evaluation efforts over the years, both useful and ill-advised, and have some thoughts to impose on you.

To summarize the positions, I’ll paraphrase Dan and Wendy Kirkpatrick (probably incorrectly, but this debate happens so often that I’m using Dan and Wendy more as archetypal voices for both sides of the argument).

Dan: Learning is a continuous, connected and collaborative process. It is part formal, part informal and part social. Current evaluation methods are dated, focused only on formal learning events, and need to be tossed. (He doesn’t say it, but I think he would place less importance on evaluation in the growing world of social learning.)

Wendy (Kirkpatrick): Formal training is the foundation of performance and results. It must be evaluated in measurable terms. Clearly defined results will increase the likelihood that resources will be used most effectively and efficiently to accomplish the mission. (She doesn’t say it, but I think she would suggest that social learning, when considered at all, plays a supporting role to formal training.)

On the surface it sounds like they couldn’t be more polarized, like much of the current debate regarding formal vs. informal learning. Here are some thoughts that might help find some common ground (which, I’ll admit, isn’t as much fun as continuing to polarize the issue).

Confusing Training and Learning muddies the purpose of evaluation

In the last 10 years or so we’ve moved away from the language of training and instruction, with its prescriptive and objectivist underpinnings (boo!), to the softer language of learning, most recently of the social variety (yea!). Most “training” departments changed their moniker to “learning” departments to imply all the good stuff, but offer essentially the same set of (mostly formal) learning services. Learning is the new training, and this has confused our views of evaluation.

Learning (as I’m sure both Dan and Wendy would agree) truly is something we do every day, consciously, unconsciously, forever and ever, amen. We are hard-wired to learn by adopting a goal, taking actions to accomplish the goal (making a decision, executing a task, etc.) and then making adjustments based on the results of our actions. We refine these actions over time with further feedback until we are skilled or expert in a domain. This is learning.

Training is our invention to speed up this learning process by taking advantage of what has already been learned and freeing people from repeating the errors of others. In business, fast is good. Training, at least in theory, is the fast route to skilled performance versus the slow route of personal trial and error. It works very well for some tasks (routine) and less well for others (knowledge work and management development). Ironically, by stealing training from the hands of managers and from early mentor/apprenticeship approaches, we may have stolen its soul (but I digress).

In any case, like it or not, in an organizational setting, training and learning are both means to an end: individual and organizational performance. And performance provides a better filter for making decisions about evaluation than a focus on training/learning does.

Should we evaluate training?

If it’s worth the considerable cost to create and deliver training programs, it’s worth knowing whether they are working, even (maybe especially) when the answer is no. With the growing emphasis on accountability, it’s hard to justify anything else. Any business unit, Training/Learning included, needs to be accountable for the effective and efficient delivery of its services.

The Kirkpatrick Framework (among others) provides a rational process for doing that, but we get overzealous in the application of the four levels. In the end, it’s only the last level that really matters (performance impact), and that is the level we least pursue. And I don’t know about you, but I’ve rarely been asked for proof that a program is working. Senior management operates on judgment and the best available data far more than on any rigorous analysis. When we can point to evidence and linkages, in performance terms, showing that our training programs are working, that’s usually all we need. I prefer Robert Brinkerhoff’s Success Case Method for identifying evidence of training success (vs. statistical proof) and for using the results of the evaluation for continuous improvement.

Unlike Dan, I’m happy to hear the Kirkpatrick crew has updated their approach so it can be used in reverse as a planning tool. It’s not new, however; it has been a foundation of good training planning for years. It puts the emphasis on proactively forecasting the effectiveness of a training initiative rather than evaluating it in the rear-view mirror.

Should we evaluate social learning?

It gets slippery here, but stay with me. If we define learning as I did above, and as many people do when discussing social learning, then I think it’s folly to even attempt Kirkpatrick-style evaluation. When learning is integrated with work, lubricated by the conversations and collaboration in social media environments, evaluation should simply be based on standard business measurements. Learning in the broadest sense is simply the human activity carried out in the achievement of performance goals. Improved performance is the best evidence of team learning. This chart from Marvin Weisbord’s Productive Workplaces: Organizing and Managing for Dignity, Meaning and Community illustrates the idea nicely:


In his post Dan suggests some measures for social learning:

“Learning professionals would be well advised to build social learning metrics into the new RPE model through qualitative and quantitative measures addressing traits including total time duration on sites, accesses, contributions, network depth and breadth, ratings, rankings and other social community adjudication opportunities. Other informal and formal learning metrics can also be added to the model including a perpetual 360 degree, open feedback mechanism”

Interesting as it may be to collect this information, these are all measures of activity, reminiscent of the detailed activity data gathered by Learning Management Systems. Better, I think, to implement social learning interventions and observe how they impact standard business results. Social learning is simply natural human behavior that we happen to have a very intense microscope on at the moment. To evaluate and measure it would suck dry its very human elements.

Evaluation should inform decision-making

Evaluation is meant to inform decisions. We should measure what we can and use it in ways that don’t bias what we can’t. The Kirkpatrick approach (and others that have expanded on it over the years) has provided a decent framework for thinking about what we should expect from training and from other, more informal, learning interventions.

However, myopic and overly rigorous measurement can drive out judgment and cause us to start measuring trees while forgetting about the forest. Thinking about organizational learning as a continuum of possible interventions, rather than as an abstract dichotomy between formal and informal learning, will help us decide on appropriate evaluation strategies matched to the situation. Whew! Maybe we need to evaluate the effectiveness of evaluation 🙂


11 Responses
  1. Love it, Holly. The never-ending discussion on how, what and when to measure training, learning and development. While I understand the “perpetual 360” concept Dan noted, 360s are as subjective as they are objective. They have a role that can be valuable for development planning purposes, but they are not great indicators of learning.

    Training, in the sense that training is meant to teach Employee X how to use Excel or assemble product Y, can be measured fairly well, but iterative learning needs to be accepted as important even though it can’t be accurately measured. I will go to the end of the diving board here and say that, other than training as defined above, most learning is in fact iterative, and therefore any purported measures will be difficult to link directly to bottom-line factors. On the other hand, it would be a weak argument to suggest that such learning is not valuable to the future of the organization, something I believe most executives would agree with.

    1. Hi Karin,
      Thanks for the comment. Never-ending indeed. In my experience, way more energy goes into discussing evaluation than doing it. I’m with you on your point re: iterative learning…all training, except for the most basic of skills, is iterative. At best, training develops fledgling skills which are then refined and developed into expertise through application on the job and the feedback it provides (through people, systems and consequences). This process is learning as I defined it in the post. Very few training programs, no matter how brilliantly designed, should be expected to develop fully formed skills right out of the gate. We need a longer-term view of the impact of training.

    1. Hi Don:
      Just read your post. Interesting. I think Kirkpatrick is fine for social learning when it is used to support a formal learning program, but I’m not convinced it makes sense for a more open implementation of social media for integrating learning and work…say something like a community of practice. Take a look at the Success Case Method (if you’re not already familiar with it); I think it offers a nice alternative to Kirkpatrick for both formal and informal learning.

  2. Hi Tom,
    I’m not so sure that it cannot be used. For example, if I replaced “After Action Review” in my post with “Community of Practice,” I believe a viable CoP would be implemented. However, that is not to say it is the only way as other evaluation methods can still be used, such as the success case method.

  3. […] Tom Gram says when learning is integrated with work, nurtured by conversations and collaboration in social media environments, evaluation should simply be based on standard business measurements for the achievement of (team) performance goals. He says that improved performance is the best evidence of team learning. […]


About the Blog

This blog contains perspectives on the issues that matter most in workplace learning and performance improvement.  It’s written by Tom Gram.


