Essential Analytics for the Future of Learning

By Patti P. Phillips, Ph.D., and Jack J. Phillips, Ph.D.

Analytics measure success and serve as a basis for process improvement. The key is to define the metrics in categories or levels.

Deciding which analytics are needed has always been challenging in the learning and development field. The value chain, which describes learning success at six levels, is the most widely accepted framework for categorizing learning analytics. Regardless of where, how, or when learning is provided in the future, certain metrics along this value chain will be important to capture to know and show the success of learning.

The Value Chain

The first level is Level 0, Input, which measures volume, cost, and time for the learning program. The first level of outcome is Level 1, Reaction, which defines how participants and others see value in the program. Level 2, Learning, measures the knowledge or skills gained from the program. And Level 3, Application, measures how participants are applying the new knowledge or skills in their environment. Application leads to Level 4, Impact. Every application should have a consequence, and that is the impact. The program’s impact is what the sponsors, funders, and executives would like to see. The last and ultimate level of accountability is Level 5, ROI, which compares the monetary benefits to the program’s costs.

The important thing about these levels is that the previous level’s success is necessary for the next level’s success. For example, you won’t have a positive ROI if there is no impact, and there won’t be an impact if there is no application. There is no application if there is no learning. Also, learning and reaction levels often influence each other. The amount of learning will affect reaction, and the amount of reaction can affect learning. Similarly, for application and impact, the amount of application will affect the impact, and the amount of impact can influence application.

Regardless of where, when, or how the learning is delivered, these levels of data are present and define the success of learning. The challenge is to select an appropriate mix of metrics at each level that can lead to the success desired by all parties involved, particularly those who support, fund, and sponsor L&D. The analytics must also provide enough data for process improvement during and after the program. Let’s examine the possibilities at each level.

Level 0, Input

Input is important because it defines who is involved in the program. The most common metrics are the number and profile of participants. Another metric is the time participants are involved. Still another is the cost of the learning. While this measure is necessary if you evaluate at the ROI level, it is sometimes needed simply to compare the efficiency of one approach with another. The key is to include all the costs, not just the direct cost but also the indirect costs of needs assessment, design, development, implementation, facilitation, managing the process, and even the follow-up evaluation. Input is essential but does not represent outcomes.

Level 1, Reaction

Reaction is how the participants (and others involved in the program) see value in the program. The metrics should focus on content-related issues rather than the experience. This is not to suggest that experience is unimportant, but it is the content that will deliver the value later. If participants have a bad experience, it is usually corrected early in the process. Figure 1 shows some recommended reaction metrics. The items marked with an asterisk show a significant positive correlation between reaction and application (extent of use).

Figure 1. Typical reaction metrics

At the end of the program, participants should rate each of the following statements at least a 4 out of 5 on a 5-point scale:
  1. The program is relevant to my (our) situation.*
  2. The facilitators/organizers were effective.
  3. The program is valuable to this mission, cause, or organization.*
  4. The program is important to my (our) success.*
  5. The program is motivational for me (us).
  6. The program is practical.
  7. The program contained new information.
  8. The program represented an excellent use of my time.*
  9. I will recommend the program to others (net promoter score).*
  10. I will use the concepts and materials from this program.*

*The measures will usually correlate with application.

One of the most popular metrics is number nine on the list, the net promoter score (NPS). It is calculated by subtracting the percentage of detractors from the percentage of promoters. (On a scale of zero to 10, respondents who select zero to six are detractors, those who select seven or eight are passives, and those who select nine or 10 are promoters.) The items with asterisks are also more executive-friendly. It is essential to select a few appropriate metrics (we suggest items four, nine, and 10) and stick with them to develop trends and benchmarks across most of your programs.
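The NPS calculation described above can be sketched in a few lines of Python. The function name and the sample ratings are illustrative, not from the article:

```python
def net_promoter_score(ratings):
    """NPS: percentage of promoters (9-10) minus percentage of detractors (0-6),
    from responses on a 0-10 scale."""
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses: 6 promoters, 2 passives, 2 detractors
print(net_promoter_score([10, 9, 9, 10, 9, 9, 8, 7, 5, 3]))  # → 40.0
```

With 60 percent promoters and 20 percent detractors, the score is 60 − 20 = 40; passives count toward the total responses but neither add nor subtract.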

Level 2, Learning

Learning is at the heart of a program’s results, and measuring the success of learning can involve many tools, techniques and processes. But in the end, it is usually a simple acquisition of knowledge (concepts, facts, information, trends, etc.) and the skills acquired or enhanced. These two metrics are important to capture, and the key is to measure the extent to which people have learned what is necessary to make the program successful.

Level 3, Application

Application is defined as the routine and systematic use of the skills and knowledge acquired in the program. While many techniques are available to measure application, surveys are usually sufficient. We suggest you consider three key performance indicators measured on a 5-point scale.

  1. The extent of use.
  2. The frequency of use.
  3. The success with use.

With application, it is helpful to capture enablers and barriers. We know that learning typically breaks down at this level. Understanding what got in the way or kept the learning from being applied is beneficial for process improvement. Measure barriers with a forced-choice option during the follow-up evaluation, and capture the enablers so you can continue or increase them.

Level 4, Impact

There are two categories of impact metrics: tangible and intangible. Tangible metrics are those that are easily converted to money. Hundreds of tangible and intangible metrics exist for a given organization; Figure 2 shows some of the most important impact metrics.

Figure 2. Examples of tangible and intangible impact metrics

Tangibles:
  1. Retention
  2. Productivity
  3. Quality of work
  4. Complaints
  5. Conflicts
  6. Incidents
  7. Personal time savings
  8. Sales
  9. Promotions
  10. Response time

Intangibles:
  1. Engagement
  2. Teamwork
  3. Collaboration
  4. Communications
  5. Customer experience
  6. Stress
  7. Happiness
  8. Creativity
  9. Employee experience
  10. Networking

Consider what tangible organizational metrics are essential to the top executives. These are metrics that the top executives monitor routinely and are concerned about constantly. Ideally, these metrics are connected to the program in the beginning. You can capture this data at the end of each program before the participants use the skills by asking how this program will influence these metrics using a 5-point scale. Then, in a follow-up, you can ask about the extent to which this program has influenced these metrics.

The intangibles are those metrics that are not generally converted to money. You can collect intangible data in the same way.

Level 5, ROI

ROI is the ultimate level on the value chain, as it compares the monetary benefits of the program to the costs. Evaluation at this level should be limited to those programs that are very expensive, strategic, and important. ROI is very insightful data; leaders want to know if they are getting enough out of the program to cover the cost.

This level involves converting data to money for each of the tangible impact metrics influenced by the program, tabulating the fully loaded cost of the program with direct and indirect costs, and calculating the ROI using two formulas (the benefit-cost ratio and ROI). Although the ultimate metric, ROI is not a metric that is routinely reported. It may be the most important metric you could deliver for executives because ROI shows that the program was worth it. Additionally, ROI may change the perception of learning because executives quickly see that L&D is an investment, not a cost, particularly if the ROI is positive.
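The two formulas mentioned above, the benefit-cost ratio and the ROI percentage, can be sketched as follows. The dollar figures are hypothetical, used only to illustrate the arithmetic:

```python
def benefit_cost_ratio(benefits, costs):
    """BCR: monetary program benefits divided by fully loaded program costs."""
    return benefits / costs

def roi_percent(benefits, costs):
    """ROI (%): net program benefits (benefits minus costs)
    divided by fully loaded program costs, times 100."""
    return (benefits - costs) / costs * 100

# Hypothetical program: $750,000 in monetary benefits, $500,000 in costs
print(benefit_cost_ratio(750_000, 500_000))  # → 1.5
print(roi_percent(750_000, 500_000))         # → 50.0
```

Note the difference: a BCR of 1.5 means the program returned $1.50 for every dollar spent, while the ROI of 50 percent expresses only the net gain relative to cost.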

Conclusion

Moving up the value chain, the key to having essential analytics is selecting the measures that reflect the success you need and that satisfy your executives. The challenge for most learning and development teams is to take their evaluations to the next level, moving them up the value chain. It doesn’t matter how the learning is delivered; the metrics are the same. These metrics are essential, telling, and insightful, and they can make a big difference in the support for L&D, and, yes, in the funding for L&D in the future.

This article was originally published on March 1, 2023, on ChiefLearningOfficer.com.