I received a few e-mails about my last post on performance feedback as informal learning. The questions concerned how to implement a performance feedback system of the type I described, so I thought I’d follow up with some implementation steps and alternatives.

Performance feedback is frequent, specific and objective information to individuals (or teams) regarding how well they are performing against job requirements/standards. A performance feedback system is a process for consistently and visually providing performance feedback to employees.

Here are the broad steps needed to implement a performance feedback system, followed by two approaches to putting them in place.

1. Identify the key job or role outputs.

Outputs (results, accomplishments) are the valued outcomes of our thinking and behaviour at work: documents, decisions, designs, materials and so on. They are the starting point for any effective performance feedback system. Identify outputs at the process, department, job or team level, depending on the scope of your project. Most jobs will have 5-8 key outputs.

2. List the critical requirements for each output

Business processes transform job outputs into valued products or services, so the receivers or users of those outputs are your best source for defining what makes them useful or valuable. These “critical requirements” can usually be categorized into one of three types:

  • Quality (accuracy, ease of use, novelty, reliability etc)
  • Quantity (frequency, volume, rate, timeliness)
  • Cost (labour, materials, overhead)

Any output can be measured on all three dimensions, but only what is most important to the user/customer should be measured. To determine importance, ask:

  • Does actual performance typically vary on this measure?
  • If performance varies on this measure, does it matter?
  • If it does vary, is the variation large enough to require action?

Often, higher-level measures (unit, department or process measures) can help define which measures are important.
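
If it helps to see this concretely, here is a minimal sketch of how key outputs and their critical requirements could be recorded. The outputs, requirements, measures and field names are invented examples for illustration, not prescriptions from any particular tool:

```python
# Hypothetical sketch: recording key outputs and their critical requirements.
# The outputs, requirements and measures below are invented examples.
key_outputs = {
    "customer proposal": {
        "requirement": "delivered within 5 business days of the request",
        "type": "quantity (timeliness)",
        "measure": "days from request to delivery",
    },
    "support ticket resolution": {
        "requirement": "issue resolved on first contact",
        "type": "quality (accuracy)",
        "measure": "% of tickets resolved without follow-up",
    },
}

for output, spec in key_outputs.items():
    print(f"{output}: {spec['measure']} [{spec['type']}]")
```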

3. Define how each requirement will be measured

There are really only two ways to measure critical requirements: counting and judging. Counting is easiest and most appropriate when requirements like volume, frequency, rate and accuracy (error counts) matter most. It is common in manufacturing environments.

The outputs of knowledge and service-oriented work will usually require measures of judgment as well. Judgment is essentially opinion or evaluation against a standard: ranking or rating an output based on perception or on pre-established criteria such as rating scales.
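
To make the counting/judging distinction concrete, here is a small illustrative sketch; the document counts, error counts and the 1-5 rating scale are invented for the example:

```python
# Counted measure: accuracy of a batch of documents (errors are simply counted).
documents_produced = 120
documents_with_errors = 6
error_rate = documents_with_errors / documents_produced
print(f"Error rate: {error_rate:.1%}")

# Judged measure: a reviewer rates each document against a pre-established
# 1-5 rating scale (5 = fully meets the standard). Ratings are invented.
ratings = [4, 5, 3, 4, 4]
average_rating = sum(ratings) / len(ratings)
print(f"Average rating: {average_rating:.1f} out of 5")
```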

4. Define the target/goal for each measure

Without a goal or target level of performance, the feedback will have no meaning and the visual display will have much less impact. Goals can be derived from a number of sources, including the following:

  • Top performers: What levels of performance are achieved by top performers in the group?
  • Customer or user requirements: What level of performance does the user or customer (internal or external) of the output require?
  • Benchmark studies: What are best practices in the industry for similar jobs/roles?

5. Create a visual display of performance against the target over time.

Graphs are the best way to present performance feedback because they communicate trends, provide an at-a-glance snapshot and can be maintained by the employees whose performance is being graphed. While there are many chart types to choose from, the best and simplest are line graphs that chart performance over time (time series, run charts and control charts).
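
As a rough sketch of what such a display could look like, the snippet below plots invented weekly data against a target line using matplotlib; the measure, the numbers and the 95% target are assumptions for illustration only:

```python
import matplotlib.pyplot as plt

# Invented example: weekly % of orders shipped on time, against a 95% target.
weeks = list(range(1, 11))
on_time_pct = [88, 90, 87, 91, 93, 92, 94, 95, 96, 95]
target = 95

plt.plot(weeks, on_time_pct, marker="o", label="Actual")
plt.axhline(target, linestyle="--", color="red", label="Target (95%)")
plt.xlabel("Week")
plt.ylabel("% of orders shipped on time")
plt.title("On-time shipping vs. target")
plt.legend()
plt.show()
```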

Encourage self-monitored performance. When feedback is given to individuals or small groups, they can measure their own performance, which makes the feedback immediate.

Do not display individual performance graphs publicly; keep them private. Team graphs can and should be displayed publicly.

Some performance management software systems have charting capabilities, but they are often not useful for results-driven performance feedback because they measure employee behaviour rather than job outputs or results. Charting software from the process and quality improvement world is more appropriate here.

Implementation options

The steps listed above can be implemented through managers or directly with employee teams. In both cases, a performance consultant facilitates the process.

Manager implementation: The consultant works directly with a unit manager to complete the steps described. Outputs, requirements, measures and feedback display methods are determined by the consultant with input from the unit manager. The manager then rolls the program out with appropriate communications and support. Employees may or may not be consulted during the process.

Team implementation: The consultant works with the employee team, using facilitated education and design sessions to generate employee defined outputs, requirements, measures and feedback display methods. Employees own the results of the effort and are more inclined to support the implementation.

9 Responses
  1. Bill Sanders

    Thanks for this Tom. It’s similar to charting results in quality improvement or six sigma teams. I hadn’t thought of it in the context of learning before.

  2. Tom Gram

    Bill:
    Years ago I did quite a bit of quality and process improvement training and saw how powerful simple measurement and charting can be in improving performance. The byproduct of that process was always informal learning.

    The same ideas underlie performance consulting and organizational development approaches. Some of the best approaches for improving learning and performance (are they really different in the end?) are methods that combine quality improvement and organizational development.

  3. Hi Rick,
    Yeah, I think many people think of a feedback system as some variation of performance appraisal, which is not what we’re getting at. A simple visual representation of performance (output, however defined) against a standard or goal brings a lot of clarity and focus to work.

  4. […] Feedback in the Workflow. Wonderful natural feedback exists in the form of business results and performance data. We don’t tend to think of it as a learning tool, but in the context of deliberate practice, it is one of the most powerful. It requires connecting the data to individual or team behavior. It is the cornerstone of approaches to team learning found in improvement methods like Lean, Six Sigma and performance technology. Here’s a post with some ideas on implementing a learning feedback system […]
