If you’ve been in the learning business for a while, you’ve likely seen a few examples where learning initiatives simply missed the mark. They didn’t produce the anticipated return on investment. Planned performance improvement did not materialize. Learners didn’t develop the skills targeted by the program. Or if they did, the skills certainly aren’t being applied on the job. Maybe all of the above.

Failures like these are more common than we like to think. Sometimes they’re not as visible as failures in more tangible areas of the business, like manufacturing defects (Toyota’s sticky brakes) or financial misadventures (take your pick). When substantial resources have been committed to well-intentioned, hard-to-measure initiatives like training, success is sometimes declared when real evidence suggests otherwise (somehow I’m seeing George Bush on the deck of an aircraft carrier here). The phenomenon is not limited to formal training. Informal learning efforts, and I suspect a few recent social learning/social media efforts, have also met with limited success.

If you’re honest, some of those programs might even have been your own (I know a few of mine have had less than stellar impact). Or you may have had a role in a larger effort. Perhaps you designed it, identified the original need, delivered it, or managed the implementation. Maybe you created an impeccable design that was poorly implemented, or vice versa. Learning can derail in many ways.

If we follow our own advice, there is a lot of learning to be had in these partial successes and outright failures. I’m doing a presentation soon titled When Learning Fails: Classic Mistakes and How to Avoid Them, and I’m interested in your anecdotes and stories. Have you been involved in a learning program that did not go so well? Have you observed any train wrecks or near misses? What caused the issues? What was learned that helped improve your next initiative?

Please post any input in the comments section below, or enter a link to a blog post or tweet to #epiclearningfail. I’ll review the responses for the most common causes and lessons learned, and summarize them in later posts. Looking forward to your responses.

Update: Here is the presentation from the National CSTD Conference, Nov. 12, 2012.


4 Responses
  1. One of the first questions to ask during front-end analysis is “how are we going to know it was successful?” So classic mistake #0 is forgetting (a little or entirely) to write up the Lx (L1, L2, or L3) metric you’ll be measuring success against.


About the Blog

This blog contains perspectives on the issues that matter most in workplace learning and performance improvement. It’s written by Tom Gram.
