How To Evaluate Your Optimization Program

To assess the health of your digital experience optimization efforts, it's crucial to move beyond measurement. Here's how to evaluate instead.

Key Takeaways

By the end of this article, you should have the knowledge and resources to “check the box” in these areas…

  • Why differentiating between evaluation and measurement is crucial to a successful optimization program
  • How to take a health pulse on your whole digital experience improvement effort
  • When to supplement evaluation with measurement and how to do it

Your optimization program health is more than just your conversion rate. It’s tempting to boil everything down to a single number, but that often fails to tell the full story of your effort to improve your site or app’s digital experience.

Of course, you still have to gauge the effectiveness of your optimization program in some way. That’s where a shift in thinking from measurement to evaluation comes in.

Imagine you’re tasked with building a house. You could measure progress by counting the number of bricks laid each day. In reality, though, progress is multidimensional.

Are the bricks being laid in the right places? Do they align with the architectural plans? Do they contribute to the overall structural integrity and aesthetic appeal of the house?

Measurement is counting the laid bricks. Evaluation prompts you to step back and consider the quality of the craftsmanship, the efficiency of the construction process, and whether the final result meets the needs and expectations of those who will live in it.

In the same way, evaluating (instead of just measuring) an optimization program involves looking beyond surface-level metrics like conversion rates to understand:

  • How each initiative contributes to the larger strategic goals
  • If efforts are aligned with user needs
  • Where strategies can be refined based on qualitative insights

Just as a well-built house requires thoughtful planning, skilled labor, and ongoing evaluation to ensure its success, a robust optimization program should be evaluated based on both quantitative metrics and qualitative impact.

Today I want to share more about how to shift your thinking from measurement to evaluation and ultimately paint a more holistic picture of your optimization program.

Evaluation vs Measurement

When we talk about evaluating your program, it’s important to differentiate between evaluation and measurement.

While measurement quantifies success based on specific numbers such as conversion rates or click-through rates, evaluation delves deeper by assessing the broader qualitative implications of efforts within the context of the overall strategy.

Evaluation examines the underlying factors influencing the measured outcomes. By evaluating, rather than solely measuring, organizations gain a comprehensive understanding of what drives success and can make informed changes to everything from their experimentation queue to their team structure.

So, how do you do it?

Evaluating Your Optimization Toolbox To Understand Program Health

There are five competencies of successful optimization teams that can guide your evaluation process. We call them the 5 Factors, and together these pillars of optimization make up a robust toolbox.

Think of it as a health pulse for your whole digital experience improvement effort. The 5 Factors consider both inputs and outcomes, uncovering where your team is succeeding and where you might need more support.

1. Data Foundations

The first competency is data foundations, which includes all of the basic and advanced analytics that will inform the rest of your decision-making. This data should come from a trusted source, be hygienic (meaning it’s tracked properly), and be accessible to whoever needs it in your organization.

There is plenty you could test or change on a website that wouldn’t lead to much (if any) growth.

A strong data foundation helps you prioritize your efforts so you can focus on activities that have the biggest impact.

Evaluate your data foundations based on these criteria:

  • Your goals are clear, and your team is aligned on them
  • Your data comes from a trustworthy source of truth (like Google Analytics or BigQuery)
  • The optimization team has both ownership and authority
  • You use data to prioritize your optimization efforts

2. User-Centered Approach

Understanding user behaviors and goals helps you design an experience that increases engagement and conversions. The most successful digital leaders have a user-centered mindset that guides all of their testing and implementations. This is crucial to evaluate when trying to understand your program’s health.

A good optimization program includes a data-backed understanding of visitors’ demographics and entry context. It also includes qualitative information about the customer base, such as motivations, triggers, and pain points.

Armed with this understanding, you can create a strategic roadmap that outlines the top priorities of an optimization program. These priorities should align with user challenges, not executives’ opinions.

How can you evaluate if these measures have been successful?

  • Reflect on whether you’ve completed a strategic roadmap that prioritizes improvements to your key metrics.
  • Survey your team to determine if they’ve built a deepened understanding of their audience.
  • Fill out the 5-Factors Scorecard™

3. Resourcing (Skills)

Successful optimization programs require diverse expertise: research, marketing, data analysis, design, and engineering, among others. It takes a wide range of skills to move through the optimization process properly.

If you have a large optimization engine, your program likely employs many experts who contribute to a network of expertise. On small teams, marketers are often tasked with wearing all the hats, even when they lack the expertise to do so.

Having a lean team doesn’t have to prevent you from having a big impact. A well-designed optimization team doesn’t need all of these disciplines in-house; it just needs access to them.

In most cases, if you evaluate your program and find gaps in key functions, outsourcing a group of disciplines can be preferable to hiring in-house because you gain the expertise of a whole team rather than just one person.

Leveraging a team of experts, like The Good, is an efficient way to gain access to valuable resources, fill gaps in your organization chart with capable people, and build a backlog of experiments for testing and implementation.

To evaluate your resources, consider these factors:

  • The number of disciplines that contribute to your optimization efforts
  • The amount of time each discipline dedicates per week (more than 5 hours per week is ideal)
  • The number of experiments in your backlog
  • How often research is conducted on your website or app
  • Testing methodologies your team is up-skilled on

4. Toolkit

Your toolkit refers to the variety of tools you use for planning, measurement, and protocols with the ultimate goal of improving your digital experience. These tools keep you moving toward your goals and fall into three categories:

  • Prioritization: You have a standardized project planning process, usually in the form of a prioritization framework.
  • Research: You conduct generative research (to understand your audience) and evaluative research (to understand if your solutions work).
  • Experimentation: You’re set up to perform on-site experiments (in the form of A/B testing) and off-site validations (such as card sorting, preference testing, tree testing, and first-click testing).

If you’re working with a prioritization framework like RICE, ICE, or PIE, you’re probably on track for success. Just be sure your team is using it objectively and not shoehorning their own ideas into the framework.

Additionally, you can evaluate your toolkit based on how quickly you can identify meaningful experiments through your prioritization framework and the breadth of those experiments. If your toolkit constantly produces meaningful hypotheses about website or app improvements, you’ll always have ways to improve.
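To make prioritization concrete, here’s a minimal sketch of RICE scoring in Python. The example ideas and numbers are hypothetical; the formula (Reach × Impact × Confidence ÷ Effort) follows the standard RICE definition.

```python
def rice_score(reach, impact, confidence, effort):
    """RICE prioritization score.

    reach: users affected per period
    impact: impact per user (commonly a 0.25-3 scale)
    confidence: 0.0-1.0
    effort: person-months required
    """
    return (reach * impact * confidence) / effort

# Hypothetical experiment ideas, ranked by score
ideas = [
    ("Simplify checkout form", rice_score(8000, 2, 0.8, 3)),
    ("Redesign homepage hero", rice_score(20000, 1, 0.5, 5)),
]
ideas.sort(key=lambda pair: pair[1], reverse=True)
for name, score in ideas:
    print(f"{name}: {score:.0f}")
```

The point of a framework like this isn’t the arithmetic; it’s that every idea gets scored on the same inputs, which makes it harder to smuggle in pet projects.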

5. Impact & Buy-In

Optimization programs that have the greatest impact also have the greatest buy-in from their leadership. Leaders who care about optimization tend to bring budgets, resources (tools and people), and the right culture for incremental change.

Your impact refers to your program’s outcomes. Keep in mind, however, that this isn’t only measured by the conversion rate. We also look at:

  • The number of research projects completed. More information means a better understanding of your users, which affects all of your future experiments.
  • The number of experiments launched. More experiments mean progress toward improvement, even if it’s tough to measure.
  • The number of changes implemented. A change means you identified a valuable improvement, so the more, the better.

It’s important to consider the intangibles as well. For instance, if you develop a large catalog of insights and research that informs marketing AND efforts across other departments, you might evaluate success by how complete your insights library is or how often cross-functional teams access it.

Of course, your impact can also be measured by annualized revenue gains on experiments and the ROI based on that number. But it’s just one evaluation method when assessing the health of a fully functioning, successful optimization program.

Supplement Evaluation With Measurement

We evaluate optimization programs against the five competencies outlined above, but your internal team should supplement that understanding of your digital product’s health with measurement.

Here are a few metrics you’re likely already tracking (if not, it’s time to start) that can give additional context to your evaluation.

Keep in mind that these metrics are most informative when tracked over time. A single data point is rarely helpful, but the change between intervals is, and as a supplement to evaluating your optimization toolkit, it can help you make better decisions.

Net Promoter Score (NPS)

Net Promoter Score measures customer loyalty and satisfaction by asking customers how likely they are to recommend a product or service to others on a scale from 0 to 10.

Scores are categorized into promoters (9-10), passives (7-8), and detractors (0-6). The NPS is calculated by subtracting the percentage of detractors from the percentage of promoters.
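As a quick sketch (the sample responses below are made up), the calculation looks like this in Python:

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey responses:
    percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 6, 3, 10]
print(f"NPS: {nps(responses):+.0f}")  # 4 promoters, 2 detractors of 8 -> +25
```

Note that passives (7-8) count toward the total number of responses but cancel out of the numerator, which is why NPS ranges from -100 to +100.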

NPS indicates how well your brand is meeting customer needs and expectations. A high NPS can lead to positive word-of-mouth and referrals. You can also use it to identify areas for improvement and to foster customer loyalty.

Customer Satisfaction

Customer satisfaction is a measure of how products or services supplied by a company meet or exceed customer expectations. We gather this information through surveys, feedback forms, and direct customer interactions.

Satisfied customers are more likely to buy again, become loyal advocates for the brand, provide positive reviews, and generate referrals.

Revenue per Customer

Revenue per Customer is the average amount of money that a company earns from each customer over a specified period. It is calculated by dividing the total revenue by the number of customers.

RPC helps you identify the profitability of your customer base. You can increase your RPC through upselling, cross-selling, and improving customer engagement.
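The calculation itself is a single division; a sketch with hypothetical figures:

```python
def revenue_per_customer(total_revenue, customer_count):
    """Average revenue earned per customer over a given period."""
    return total_revenue / customer_count

# e.g. $500,000 in quarterly revenue across 2,500 customers
print(revenue_per_customer(500_000, 2_500))  # 200.0
```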

Customer Acquisition Cost (CAC)

Customer Acquisition Cost is the total cost of acquiring a new customer, including marketing, advertising, sales expenses, and any other associated costs. It is calculated by dividing the total acquisition costs by the number of new customers acquired during a specific period.

CAC helps you determine the efficiency and effectiveness of your marketing and sales efforts. Ideally, you want to keep this low to ensure a positive ROI. Monitoring it helps you identify cost-saving opportunities, refine your marketing strategies, and improve your overall profitability.
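A minimal sketch of the CAC calculation, with illustrative numbers:

```python
def cac(total_acquisition_costs, new_customers):
    """Customer Acquisition Cost: marketing, advertising, sales, and
    other acquisition spend divided by customers acquired in the
    same period."""
    return total_acquisition_costs / new_customers

# e.g. $60,000 spent to win 400 new customers
print(cac(60_000, 400))  # 150.0
```

Tracked over time alongside Revenue per Customer, this tells you whether each new customer is getting cheaper or more expensive to acquire.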

Customer Lifetime Value (CLV)

Customer Lifetime Value is the total revenue you expect to earn from a customer over the entire duration of the relationship. It considers factors such as average purchase value, purchase frequency, and customer lifespan.

CLV helps you understand the long-term value of your customers and prioritize your investments in customer retention and engagement strategies. Maximizing CLV helps you boost profitability and sustained growth.
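A common simple (non-discounted) formulation multiplies the three factors mentioned above. The numbers here are illustrative, and comparing the result against CAC shows whether acquisition spend pays off over the relationship:

```python
def clv(avg_purchase_value, purchases_per_year, lifespan_years):
    """Simple customer lifetime value: average purchase value times
    purchase frequency times expected relationship length."""
    return avg_purchase_value * purchases_per_year * lifespan_years

# e.g. $80 average order, 4 orders per year, 3-year relationship
lifetime_value = clv(80, 4, 3)
print(lifetime_value)  # 960
```

More sophisticated models discount future revenue or segment by cohort, but even this rough figure is enough to sanity-check your acquisition budget.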

All Optimization Programs are Unique

As you can probably imagine, your optimization program will look different from the work of your competitors. For instance, the data you collect will depend on your features, products, service offerings, and other factors. Other brands will collect different data based on their needs.

The bottom line, therefore, is that you should focus on what’s important to you but don’t rely solely on one metric to tell the full story of your work.

Evaluate how much you’re getting done, the overall impact on the organization, and the total value of experiments and optimizations to your users. This is the clearest way to tell if your digital optimization work was successful.

When it comes to optimization, you don’t have to do it alone. The Good can boost your impact by providing the tools, techniques, and expertise that you just can’t find in a single hire.

We’ll help you build a strategy and tactical roadmap that sets you on the path toward an optimized experience that engages users and meets your organization’s goals.

Learn more about our Digital Experience Optimization Program™.

About the Author

Jon MacDonald

Jon MacDonald is founder and President of The Good, a digital experience optimization firm that has achieved results for some of the largest companies including Adobe, Nike, Xerox, Verizon, Intel and more. Jon regularly contributes to publications like Entrepreneur and Inc.