The 6 Biggest Measurement Pitfalls and How to Avoid Them

  1. The Afterthought

“Now that we’ve launched this incredible program, let’s figure out how to measure success!” No, no, no. While a late measurement discussion is better than no measurement discussion, the missed opportunity here is enormous. If you’re figuring out how to measure success post-implementation, you’ve already missed the chance to fully level set with your program sponsor and key stakeholders on the program’s chain of impact and to quantify anticipated outcomes.

How to Avoid it: There are very few resources that I love as much as ROI Institute’s V model, which you can access for free by registering here. The purpose of this tool is to ensure that the program sponsor and other key players are all on the same page. It begins on the left-hand side of the V, where program needs are defined at each of the five levels of measurement.

I have found that we often define what we want the learner to learn, but we don’t do as great a job setting specific expectations along the rest of the chain of impact. What exactly should learners do with the information they’ve learned? What does success look like? What observable behaviors are expected? Then, when these behaviors are present, what is the business impact we expect as a result? It’s important to be specific here. Just calling out that we want a particular KPI or two to improve isn’t enough. What is the target? How much improvement is expected? There is nothing worse than celebrating success only to find out your program sponsor was expecting a bigger impact. And I’m sure we’ve all heard of SMART goals at this point, so remember the “T” is for time-bound, which also applies to program design. We want to discuss and align on when it is reasonable to expect the impact to manifest in our identified KPIs.

Lastly, it’s important to discuss ROI expectations and align on a target there, too. Believe it or not, I’ve been responsible for evaluating programs where the executive sponsor was not expecting a measurable ROI. All program benefits were anticipated to be intangible: measurable, but not convertible to reliable monetary values with reasonable time and effort. This is not a problem as long as all stakeholders are aligned on stopping at Level 4. Can you imagine not getting this alignment prior to program design and implementation? Trust me, I’ve been there, and it’s a place you don’t want to be.
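
To ground that conversation, it helps to remember what the ROI calculation actually computes: net program benefits divided by program costs. Here is a minimal worked sketch in Python, using entirely hypothetical numbers:

    # Hypothetical figures, for illustration only.
    monetary_benefits = 150_000  # Level 4 impact data converted to money
    program_costs = 100_000      # fully loaded program costs

    # Benefit-cost ratio: total benefits divided by total costs.
    bcr = monetary_benefits / program_costs  # 1.5

    # ROI (%): NET benefits divided by costs, expressed as a percentage.
    roi_pct = (monetary_benefits - program_costs) / program_costs * 100  # 50.0

    print(f"BCR: {bcr:.2f}  ROI: {roi_pct:.0f}%")

An aligned ROI target simply means agreeing up front, before any data is collected, on what value would count as success.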

  2. The Secret Objectives

This pitfall refers to something I see so frequently across many different initiatives: failure to disclose objectives beyond Level 1 to program participants. If you haven’t told participants what you expect them to learn, do, and impact, then you’re not properly setting them up for success, and you’re not going to get good Level 1 Reaction data.

For example, let’s say my husband comes home and tells me how to disassemble a transmission (he is a mechanic), and then he asks me, “Did I do a good job telling you how to disassemble a transmission?” I would say, “Yes, honey! Absolutely. You did a wonderful job; it was very interesting.” Now, let’s say the following day, my husband tells me he needs me to disassemble a transmission by myself in one hour. Full disclosure: I have no idea if that’s reasonable because we haven’t ever discussed disassembling a transmission, but for argument’s sake, let’s pretend it’s possible, but only if you know what you’re doing. Now that my husband has made this request, I’m like, “Hold on. I am NOT equipped to do this on my own. You’re crazy. Telling me how to do it yesterday was not enough. If I had known how you wanted me to apply it, I would have taken notes, and when you asked me for feedback, I would have told you there was no way walking me through it once was going to equip me for success.” Do you see where I’m going here?

It’s not enough to just tell or teach me about something. You need to be very specific about the expectations for application (frequency, duration, time period, etc.) and impact (what measurable outcome is expected to improve and exactly how much improvement is expected). I always see learning objectives, but I seldom see specific application objectives and their subsequent impact objectives.

How to Avoid it: Here, we are back at the V model. Before you can tell your participants about the program objectives, you’ve got to create them. How do you want people to react, and what planned action are you hoping for? (Level 1) What do people need to learn? (Level 2) How should they apply it? (Level 3) What impact is expected? (Level 4) While it’s also good to have an ROI objective (Level 5), that’s not one you necessarily need to go over with participants unless you’d like them to understand the program cost and feel a sense of accountability for a positive ROI. But you’ve got to make sure the conditions are right for that to work, which is a whole other article in and of itself.
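
To make that concrete, here is a minimal sketch of what a full set of objectives might look like written down in one place. The program, metrics, and targets below are hypothetical, invented purely for illustration:

    # Hypothetical objectives for a fictional customer service coaching program.
    objectives = {
        "Level 1 (Reaction)": "90% of participants rate the program as relevant "
                              "and commit to a written action plan",
        "Level 2 (Learning)": "Participants score 80% or higher on the "
                              "call-framework skills check",
        "Level 3 (Application)": "Participants use the framework on at least 75% "
                                 "of calls within 30 days",
        "Level 4 (Impact)": "Customer satisfaction improves 5 percentage points "
                            "within 90 days",
        "Level 5 (ROI)": "The program returns at least 20% ROI within 6 months",
    }

    for level, objective in objectives.items():
        print(f"{level}: {objective}")

Notice that each level names a target and a time frame, which is exactly what makes the objective worth disclosing to participants.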

I’d also like to point out that if you need examples of what good objectives look like for Application (Level 3) and Impact (Level 4), then look no further than the ROI Methodology Application Guide, which you can access for free here.

  3. The Cop-Out

“It’s too hard.” Is it really, though? I think I’ve heard this excuse used for at least 15, maybe even closer to 20, programs. The reality is that the ROI Methodology is simple to apply, understand, and communicate. Now, I will acknowledge that not all programs are candidates for ROI, but that doesn’t mean you can’t still measure Levels 1 through 4. To be fair, there are situations where the level of effort to obtain and analyze impact data (Level 4) is greater than the benefit of collecting it. However, there’s a big difference between writing off the process as too hard and following it, then making an informed judgment to stop below Level 5.

How to Avoid it: This one can be tricky. You’ve got to get to the root cause. Why are you getting the “it’s too hard” pushback? Is it because your sponsor and/or decision maker doesn’t understand there is a methodology available, or are they just not familiar enough with the methodology to consider it credible? Unfortunately, I’ve also been in a situation where the program sponsor was concerned the study might yield unfavorable results, and I was explicitly instructed NOT to perform an evaluation on the program in question.

If you think the program sponsor (or other key stakeholder with decision-making power) may be resistant because they’re afraid of a negative ROI, that’s an opportunity to articulate the benefits of the methodology beyond the ROI in and of itself. The collection of enablers, barriers, and even specific behavioral and knowledge data can identify points of failure and enable correction. Avoiding the results won’t make the results any better. Of course, I would recommend having this conversation 1:1.

  4. The Surprise

“What we’ve got here is failure to communicate” – when you read this in your head, did it sound like the clip from Cool Hand Luke? When it comes to stakeholders, it should never be “Surprise! We measured this!”

How to Avoid it: For one thing, awareness of the measurement plan may drive accountability. When people know that you’ll be following up continuously for a period of time and that the results will be communicated to the program sponsor(s) and leadership, they’ll be more likely to continue application (participants) and/or reinforcement (leaders of participants). Now, this communication of results should never be a “gotcha!” moment that people resent you for. Be sure to provide helpful reminders to participants and their leaders, and be fully transparent about what will be reported and when.

I do have a quick cautionary tale here for you, though. Years ago, I was working as a representative in a sales/customer service call center. Customer satisfaction was an area of opportunity. After some research, it was determined that transfers were a big pain point (no kidding). So the directive came down from above that we needed to reduce customer transfers. Well, just like a game of telephone, things get lost as they travel from person to person, right? By the time the message got to the front line, the true purpose (which was customer experience, of course) had been lost. The message had become, “reduce transfers.” Somebody had the “brilliant” idea to give all of us representatives the direct numbers for each department, and we were instructed to just give the customer the phone number and tell them to call themselves instead of transferring. Voilà! Transfer rates plummeted to record lows! Mission accomplished! Hopefully, you’re picking up on my sarcasm here.

My point is that when you focus people on the metric and lose the purpose, you risk a check-the-box, meet-the-metric-at-all-costs situation. Be mindful of this when you’re collecting data and communicating about your data collection. Choose your words carefully and focus on driving the behavior, not the metric. Also, I can’t stress this enough: recognition, recognition, recognition, and a few sprinkles of shared success will go a long way. It’s important to close the loop with participants and show them that their application leads to impact. Celebrate the wins and thank people for their contributions.

  5. The Vacuum

Stakeholder involvement is so important. Nothing can be successful if it is created or measured in a vacuum. Contribution drives ownership, and ownership drives success. Failure to get the right stakeholder involvement in your program design and measurement will impact engagement, response rates, and, therefore, the quality of data.

How to Avoid it: Some of my most successful programs have leveraged a team of champions. This is different from the core group that attends every project call. This group is representative of the participants (they will be participants themselves), and they provide feedback at several predetermined points. I publicly recognize this group because they deserve it, but also because they will then own it. It becomes their work, too. It’s like having a “friendly” in the rooms you can’t be in. They will encourage, promote, embrace, own, and champion the program and even the data collection. This is not the right thing for every program, but for those programs where it is right, it makes all the difference.

The other thing I would suggest is pulling in the owners of your data source as early as possible. For example, let’s say I roll out a new process where customer service representatives will send an email to customers to confirm their transaction. Part of my Level 3 (Application) measurement plan would likely include data on how frequently representatives capture a valid email address and send this email to their customers. This data is likely available, so I would not use a survey or questionnaire to ask the representatives how often they collect email addresses and send the actual email. I do not recommend waiting until you need the data to talk to the data owner.

Connect with them upfront as you are designing your data collection plan. Build the relationship and make sure you have a clear understanding of what can be provided and at what level of granularity. For example, if you’re using a control group for your program, can you isolate the data to compare the control group with the trained group, or can they only provide high-level data by region?
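
As a minimal sketch of what that Level 3 data pull might look like, suppose the data owner can export transaction records with flags for email capture and send, plus a group label. Every field name and record below is hypothetical:

    # Hypothetical transaction records from the data owner's system.
    transactions = [
        {"rep": "A12", "group": "trained", "email_captured": True,  "email_sent": True},
        {"rep": "A12", "group": "trained", "email_captured": True,  "email_sent": False},
        {"rep": "B07", "group": "control", "email_captured": False, "email_sent": False},
        {"rep": "B07", "group": "control", "email_captured": True,  "email_sent": True},
    ]

    def application_rate(records, group):
        """Share of a group's transactions where the email was captured AND sent."""
        in_group = [r for r in records if r["group"] == group]
        applied = [r for r in in_group if r["email_captured"] and r["email_sent"]]
        return len(applied) / len(in_group) if in_group else 0.0

    # Comparing groups only works if the data owner can provide
    # group-level granularity, hence the upfront conversation.
    for group in ("trained", "control"):
        print(f"{group}: {application_rate(transactions, group):.0%}")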

  6. The Isolation (or Lack Thereof)

The point of measurement is not to say, “Look at me! My initiative was successful!” Though that can be a benefit, the point of measurement is to learn about your program. What worked? What didn’t? What factors drove success, and what got in the way? What did we learn about drivers and detractors of success that we can apply to future initiatives? Was this a worthwhile investment? Why or why not?

Part of what you want to learn should be about contributing factors across the board, not just the ones from your program. Success can often be the perfect storm of conditions, and re-creating just one condition won’t necessarily yield future success. Just because we observe a positive change in a target KPI does not mean that positive change can be attributed to our program. What’s more, you will quickly damage relationships if you’re taking credit for what others perceive to be their work.

How to Avoid it: I don’t want to turn this into a full-blown lesson on isolation (though if you’re interested in that, you can sign up for ROI Institute Boot Camp or Certification!), so I’ll just provide one example here.

I had a program where we saw significant improvement in the target KPI, and neither a control group nor trend-line analysis was an option for this particular project. First, I used my team of champions to identify everything that was going on in the business at the time that could have potentially impacted the metric. In addition to the training program, there were some recognition incentives happening, leaders were also holding coaching sessions with participants where they reinforced the training, and, in this case, the natural learning curve meant that over time, participants would have improved somewhat even without additional training.

Next, we turned to what was deemed the most reliable source: participants’ direct managers. We asked them to estimate the percentage of the improvement that could be attributed to each of these factors. There was also an “other” option where, if chosen, respondents would state specifically what factor beyond those listed had influenced the program results. Now, those more familiar with this process know we also adjusted these estimates for confidence, but as I said, this is not a lesson on isolation. Instead, what I’d like to point out is that we were able to provide feedback on how impactful these other factors were in addition to the program, and we only took credit for the percentage of improvement leaders directly attributed to the training.
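
For readers who want to see the arithmetic behind that adjustment, here is a minimal sketch using invented numbers rather than the actual study’s data:

    # Hypothetical adjustment of a KPI improvement for attribution and confidence.
    kpi_improvement = 10.0  # total observed improvement (percentage points)

    # One manager's estimates (both invented): the share of improvement they
    # attribute to the training, and their confidence in that estimate.
    attribution_to_training = 0.40  # 40% of the improvement credited to the program
    confidence = 0.80               # 80% confident in that 40% estimate

    # Conservative adjustment: claim only the attributed share, discounted by confidence.
    claimed = kpi_improvement * attribution_to_training * confidence
    print(f"Improvement claimed for the program: {claimed:.1f} points")  # 3.2 points

Averaging these adjusted estimates across respondents, and claiming only that discounted share, keeps the final impact claim conservative.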

This was valuable information, and ultimately, senior leadership was thrilled with the added insight and felt that the training probably did even more than we were taking credit for. This information also served to provide insight into the combination of factors that led to success, and those involved in the recognition incentive program felt acknowledged, too.

I hope you found this article helpful. Hungry for more? Please join me at our upcoming ROI Boot Camp or enroll in ROI Certification to earn your CRP and take your program measurement strategy to the next level!

This article was originally published on LinkedIn, January 8, 2024.

About the Author

Katharine Aldana, a two-time “ROI Practitioner of the Year” award winner, is known for her creative program improvement strategies. Katharine spent seven years designing learning strategies and performing program evaluations for a Fortune Top 20 organization before transitioning to her current role as Senior Manager-Business Strategy within the same organization. Katharine’s enthusiasm for program effectiveness and real-world experience make her a dynamic facilitator and highly sought-after ROI implementation resource.