Made to measure: Getting design leadership metrics right

Setting a North Star and combining qualitative data with quantitative measures can demonstrate the value and effectiveness of design.

Last year we posed a simple question to organizations: “Are you asking enough from your design leaders?” It’s an important question given that companies excelling in design grow revenues and shareholder returns at nearly twice the rate of their industry peers. Moreover, in these turbulent times, companies with a human-centered approach can better navigate and respond to the seismic shifts taking place in how we live, shop, and work as a result of the COVID-19 pandemic.

However, the results of our survey were, quite frankly, a bit dismal: only 10 percent of respondents reported realizing design’s full potential, a result driven primarily by a lack of clarity as to how design leaders can contribute and uncertainty about what to expect of them in their role.

Previously, we wrote about several interconnected interventions that can help elevate design’s performance and role in an organization. One has sparked more debate than the rest: the use of design metrics.

To be sure, design-related metrics are challenging to get right. They require a balance of empathy, qualitative insights, and quantitative awareness. It’s the last of these that has proven thorny to many, particularly when tying quantitative customer insights to financial performance and business actions. It is the ability to link design to value, however, that can unlock design’s strategic potential.

When crafted correctly, design metrics can give leaders accurate readings on the health and performance of a design organization so they can steer it effectively. One software company with well-built design metrics can now estimate its future revenues from new software releases, enabling staff to precisely target the design elements that will increase adoption and, ultimately, revenue.

As the C-suite increasingly recognizes the impact that unique, human-centered insights can have on the company’s bottom line, design leaders will face growing demand to supply them. As one medtech CEO put it, “If I don’t have robust data on whether my products are improving the lives and outcomes for my patients—and doing so better than my competitors—then shame on me.”

But how does a design leader get metrics right? In our experience, there are eight actions that design leaders can take to both capture the right metrics and elevate their use across the organization to maximize design’s impact. Taking inspiration from better-known 3D frameworks, we think of these activities as falling into three phases: Dream, or setting your strategy; Detail, or identifying the key performance indicators (KPIs); and Drive, or steering the company toward action (Exhibit 1).

Exhibit 1: Design leaders can craft an effective design-metric system by using a framework that focuses on three major areas.

Dream: Setting your strategy

To get the full value of design, organizations need to position their strategy around a North Star metric, which can be one metric or a small set of metrics that captures a particular business ambition around which business teams and leaders rally and to which design will contribute. The ambition might be to improve customer experience, develop new businesses, or improve organizational ways of working. The quantified metric can come from multiple sources: a user-experience metric, such as a customer-satisfaction score; an operational metric, such as customer retention; or a financial metric, such as revenues (see Exhibit 2 for examples).

Exhibit 2: North Star metrics typically take one of three forms.

Adopting a North Star metric is not a solitary endeavor for a design leader. It is often chosen in conjunction with other leaders so that teams strive toward the same goal. It is by working together that companies can create the elusive seamless experience that customers prize.

Set your North Star metric

Our research suggests that there is no one metric that is inherently better at driving success. Far more important is that the metric align with the company’s value proposition, resonate with business and functional leaders, and be easily measured and reported.

In many cases, we find this happens naturally, as design leaders often inherit their North Star metric as part of an existing business mandate. For example, a design leader sitting within the customer-experience, marketing, or digital function will likely see a strong preference for survey-based customer-satisfaction metrics and quantified targets that the function already embraces.

But in cases where leaders are given a vaguer business ambition (for example, “help the organization become more design-led”), articulating a clear North Star metric for design that meets these criteria can build support from the outset and ensure the design goal is in lockstep with the business.

When setting a North Star metric for the team for the first time, it’s good practice for design leaders to test it with business leaders in the organization, especially when they are considering a metric not commonly used by the business. We recommend leaders start by running a small pilot project within a single business unit to show how the metric informs decisions, followed by a staged rollout as its value is proven. This approach can convey the importance of the metric and build acceptance of the thinking behind it.

Detail: Building a holistic metrics system

Once you have set your North Star metric, you’ll need to know how to move it in the right direction. Doing so requires developing a system for measuring performance drivers at the right level of detail.

Use cascading KPIs

To target specific areas for improvement, leaders must first identify and link KPIs across each level of a product or service experience that can drive the North Star. This can include everything from pricing and branding to usability and product or service performance (Exhibit 3).

Exhibit 3: Metrics should cascade across each level of a product or service experience.

Consider, for instance, a North American software provider that can now estimate the future revenues associated with new software releases thanks to its metrics system. Its North Star is a single customer-satisfaction score that expresses how widely a new feature or product is being used by customers. Leaders break this score down for individual product lines and specific features, based on the criteria that best gauge their performance (for one software product, the score is based on the percentage of users using the feature, while another looks at the amount of time spent using the feature), and drill down within specific time periods (such as the month after a major release). As a result, they know sooner which product releases will meet revenue targets and which will need further fine-tuning.
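
To make the cascade concrete, here is a minimal sketch in Python of how a North Star usage score might be decomposed into feature-level KPIs. The data, column names, and scoring rules are illustrative assumptions, not the software provider’s actual system.

```python
import pandas as pd

# Hypothetical usage log: one row per user-feature interaction.
events = pd.DataFrame({
    "product": ["A", "A", "A", "B", "B", "B"],
    "feature": ["export", "export", "share", "search", "search", "search"],
    "user_id": [1, 2, 2, 1, 2, 3],
    "minutes": [5, 12, 3, 30, 22, 41],
})
total_users = events["user_id"].nunique()  # 3 unique users in this sample

# Product A's feature-level KPI: percent of users who touched each feature.
adoption = (
    events[events["product"] == "A"]
    .groupby("feature")["user_id"]
    .nunique()
    .div(total_users)
    .rename("pct_users")
)

# Product B's feature-level KPI: average minutes spent per interaction.
time_spent = (
    events[events["product"] == "B"]
    .groupby("feature")["minutes"]
    .mean()
    .rename("avg_minutes")
)

print(adoption)    # e.g., export: 0.67, share: 0.33
print(time_spent)  # e.g., search: 31.0
```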

Sometimes, however, there’s no obvious relationship between, say, a survey-based score, such as a brand-recognition score, and journey-based measures, such as a customer-satisfaction score. In these cases, we find leaders can often quantify the link mathematically, using, at first, basic rules or heuristics (for instance, “We won’t invest unless it provides a 20-point customer-satisfaction score boost”) and, ultimately, a detailed correlation analysis. A financial-services company, for instance, traces its customer-experience measures, which include basic customer-satisfaction surveys and a customer-effort score, to the measure the business cares most about: transaction volumes. It calculates how transaction volumes change as the design team fixes specific pain points within the customer journey or solves for unmet needs in the experience. Linking the measures in this way lets the company, as it pilots new experiences, calculate the financial benefit based on the customer-experience benefit.
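
A stripped-down version of such a correlation analysis might look like the following Python sketch. The paired observations and the 10-point scenario are hypothetical; a real analysis would also account for seasonality, sample size, and confounding factors.

```python
import numpy as np

# Hypothetical history of past fixes: the lift in the customer-satisfaction
# score after each fix, and the observed change in transaction volume.
csat_lift = np.array([2.0, 5.0, 8.0, 12.0, 20.0])     # score points
volume_change = np.array([0.5, 1.1, 2.2, 3.0, 5.4])   # percent

# Least-squares fit: volume_change ≈ slope * csat_lift + intercept.
slope, intercept = np.polyfit(csat_lift, volume_change, 1)
r = np.corrcoef(csat_lift, volume_change)[0, 1]

# Translate an expected 10-point score lift from a pilot into a volume estimate.
predicted = slope * 10 + intercept
print(f"r = {r:.2f}; predicted volume change = {predicted:.1f}%")
```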

Certainly, this isn’t a quick activity; it can take months. The rewards, however, are worth it: leaders can prioritize improvements to specific journeys or products and derisk investment by delivering products and experiences customers actually want.

Measure journeys, not touchpoints

A common trap many companies fall into is measuring only one customer interaction, such as a single in-store experience. However, to fully understand the impact of design decisions, teams need to consider the entire customer journey by linking experiences across multiple visits and channels.

One European telecommunications company, for example, stitched together customer calls to its call center, and analysis of the data revealed that customer satisfaction dropped substantially after customers had to place a second call for assistance with the same problem. This insight helped the design team reprioritize the goals of its current work on redesigning how customers purchased and upgraded products. The team widened its aperture from optimizing design for a zero-touch experience, which was its original focus, to also improving design in a way that reduced the need for a customer to call more than once to solve a problem. This enabled design to improve the overall customer experience more comprehensively—and efficiently.

In another example, the team redesigning the digital channels of a hotel company initially intended to focus only on website conversion as its one critical goal. However, once the team began tracking website engagement and talking to guests, it discovered that longer visits (supported by new and improved content) provided a vital indicator of interest and ultimately led to conversion. Had the team removed engagement opportunities in order to streamline the touchpoints required to book a room, it would have eliminated the chance to help its guests dream of the vacations that awaited them and failed to build their confidence that its properties were the ones to choose.

Choose metrics that matter to the business

Often, there are multiple ways of measuring the same thing. Usability, for example, can be assessed with some combination of metrics as diverse as completion rates, efficiency, and survey-based satisfaction scores. So how can design leaders ensure they are choosing the right ones for their organization? Our experience suggests calibrating metrics with the business in mind and, most often, alongside business leaders to ensure alignment.

Design leaders should collaborate with adjacent business leaders to ensure all metrics can be easily understood and placed in context. At an Asia–Pacific consumer-goods company, for example, the chief design officer (CDO) worked with business leaders from the company’s research and development and marketing departments to develop an indicator for the future performance of one product. The exercise took a number of months, with dedicated teams from finance, marketing, R&D, and design contributing their proposed set of metrics and their weighting. Ultimately, the product was given a score that combined assessments of financial viability, technical feasibility, and anticipated desirability through a weighted average across the three areas.
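
As a simple illustration, such a composite can be computed as a weighted average. The weights and sub-scores below are hypothetical; the actual weighting was negotiated across the contributing teams.

```python
# Hypothetical weights and 0-100 sub-scores; the real weighting was
# negotiated across finance, marketing, R&D, and design teams.
weights = {"financial_viability": 0.4,
           "technical_feasibility": 0.3,
           "anticipated_desirability": 0.3}
scores = {"financial_viability": 72,
          "technical_feasibility": 85,
          "anticipated_desirability": 64}

composite = sum(weights[k] * scores[k] for k in weights)
print(f"product score: {composite:.1f}")  # 28.8 + 25.5 + 19.2 = 73.5
```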

Since this was a joint effort and business leaders had an opportunity to weigh in, the metric was widely accepted. Additionally, because the company’s CDO led its development, he remained involved in discussions around new product launches, even after the design team’s initial framing role had wound down, bringing fresh insights about user experiences to help agile teams create user-centric product strategies.

Metrics should also be easily defended and viewed as logical and credible by business leaders in the organization. For this reason, we find that including real-time measures from existing operational systems (something more than 60 percent of McKinsey Design Index respondents fail to do) often offers a better solution than using survey data on its own, which can appear arbitrary or overly subjective. Design leaders at one large corporation, for example, found that business leaders wouldn’t embrace a score they had developed to help anticipate the in-market performance of a new product because qualitative assessments of usability relied primarily on self-assessments, which the business leaders perceived as biased, even though the score proved a viable predictor of launch success. This isn’t to suggest that design leaders eschew qualitative measures altogether (in fact, in the next section we share why they’re imperative and how best to incorporate them in your metrics). But it is important for design leaders to question how the business might respond before relying on them too heavily.

Collect qualitative as well as quantitative measures

Our work has demonstrated that it’s essential for leaders to pair quantitative metrics with qualitative inputs, just as our research has shown that designers themselves should pair qualitative and quantitative insights and inputs in their own work. Certainly, qualitative measures can be tricky. In addition to sometimes facing skepticism from the business side, they can be challenging to obtain regularly from a broad enough base of customers to feel representative. However, we’ve found that only by combining them with quantitative measures can leaders capture the full value of design. In many cases, qualitative inputs provide the “why” that illuminates the quantitative “what” and show the best route to solutions.

Qualitative measures can take many forms, such as customer interviews, quotes from surveys, live demonstrations where customers make use of a product or service, or documented case studies. Many companies use aggregated statistics to understand broad trends in sentiment but can then quickly drill down to specific themes and comments to understand what is really happening. Some companies we know have customers join their monthly executive-committee sessions. At one large consumer-goods and pharmaceuticals company, the chief design officer makes use of a collection of powerful case-study stories that demonstrate the value design has brought, alongside more quantitative data.

Track team performance

Of course, well-crafted metrics can move in the right direction only if the design team is performing well. For most leaders, measuring design-team performance includes assessing the work of designers and looking at measures such as design spend, utilization, and productivity. Collectively, these measures can help confirm that designers are performing effectively, and doing so within budget, which can have the knock-on effect of elevating design in the eyes of the wider leadership team, who might have the outdated view that design is a nice-to-have that comes at added cost. At one North American technology company, for instance, the head of design earned the respect of the CFO by not only staying on budget but also reducing design costs annually over six years. Doing this required not only performance metrics but also design-budget ownership.
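
As a rough illustration, the basic cost-and-efficiency measures can be rolled up with simple arithmetic. The figures and formulas below are generic, hypothetical definitions rather than any specific company’s.

```python
# Illustrative cost-and-efficiency roll-up; all figures are hypothetical.
billable_hours = 1_280     # designer hours booked to project work
available_hours = 1_600    # total designer hours in the period
design_spend = 240_000     # total design cost for the period, in dollars
delivered_projects = 12

utilization = billable_hours / available_hours        # share of capacity used
cost_per_project = design_spend / delivered_projects  # crude productivity proxy
print(f"utilization = {utilization:.0%}; cost per project = ${cost_per_project:,.0f}")
```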

Beyond the basic metrics around cost and efficiency, some design leaders make use of less common measures to understand their team’s performance as it relates to specific team goals. For example, at a multinational financial-services company seeking to turn design into a tool for innovation and strategy development, the design leader targets a failure rate of 75 percent for design projects to ensure that design teams are working on the most ambitious ideas. Because the teams work in agile sprints and can course-correct quickly while testing and learning, a failed experiment is recognized as providing critical information rather than as an unrecoverable failure or sunk cost.

Some companies measure the impact of design within the broader organization—for example, by tracking design methods, including how frequently teams involve users in product development and prototyping, even if the design function isn’t officially involved. One CEO we know sees (and measures) the evolution of design within the organization in three broad phases: first, a narrow focus on aesthetics and form; then an emphasis on end-to-end user experiences; and, ultimately, a final phase in which design and design thinking infuse everything the company does.

Drive: Steering the company toward action

The goal of any metrics system is ultimately to drive action, and we find the most successful design leaders take steps to integrate these metrics into the fabric of their organization by regularly tracking and communicating progress with stakeholders and ensuring the metrics are embedded in performance reviews.

Visualize and share progress

We find that an effective way to track and communicate design’s performance against business and design goals is to create a dashboard featuring comprehensive and compelling visuals that convey the latest metrics as well as performance over time.

The most effective dashboards also include visible alerts that let design leaders know immediately when design is at risk of missing targets or metrics are moving in the wrong direction. Dashboards should also provide industry context, such as historical analysis or competitive insights (at the overall company level as well as for specific products and experiences), that help senior design executives understand how they measure up in the larger marketplace.
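
A minimal sketch of the kind of threshold alert a dashboard might evaluate on each refresh appears below. The metric names, targets, and trend rule are illustrative assumptions, not a reference implementation.

```python
# Each metric carries its latest reading, the prior reading, and a target.
metrics = {
    "customer_satisfaction": {"latest": 71, "prior": 74, "target": 75},
    "task_completion_rate":  {"latest": 93, "prior": 92, "target": 90},
}

for name, m in metrics.items():
    if m["latest"] < m["target"]:
        print(f"ALERT: {name} below target ({m['latest']} < {m['target']})")
    elif m["latest"] < m["prior"]:
        print(f"WATCH: {name} trending down ({m['prior']} -> {m['latest']})")
```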

Of course, as with the design metrics themselves, dashboards should be built with, and not just for, the various stakeholder groups that will use them. While the design leader will serve as the dashboard’s power user, other senior leaders will consume the insights periodically, be it through a self-serve model or during leadership meetings in which executives share progress reports.

Incentivize action

Credible, clearly communicated metrics show progress, but incentives often drive progress. Currently, only 20 percent of McKinsey Design Index respondents report using quantitative design measures in evaluations, and only 5 percent explicitly tie these to remuneration.

The best design leaders lobby to include relevant design measures in the performance reviews of senior leaders. Co-developing the metrics with these fellow leaders, as outlined earlier, makes this more feasible, but the CEO will ultimately need to hold executives accountable for their contributions and tie incentives directly to performance. At one consumer-packaged-goods company in Europe, the CEO tied all executive remuneration to average product ratings on Amazon relative to the competition, elevating design accountability to the entire C-suite. Other companies link the CEO’s own incentives to design metrics: some companies we know have tied as much as 15 percent of CEO compensation to survey-based user-satisfaction measures.


Getting metrics right is no easy task for design leaders, who must work hand in hand with senior executives across the organization to stitch together a variety of measures that capture progress and impact. But the payoffs cannot be overstated. Well-crafted metrics can ensure design executives stay on track to deliver results, unite senior leaders toward common objectives, illuminate and evangelize the value of design, and, ultimately, drive customer satisfaction and company success. Design leaders who take stock of their progress against each of the initiatives we’ve outlined and ramp up those in which they’re lagging can be more certain they’re positioned to capture the benefits of clear design metrics.
