Designing for signals: how intent & instrumentation shape AI-powered experiences

As AI mirrors how we design and learn, our role evolves from creating interfaces to defining the signals that shape intelligent experiences.

A quick note before we dive in. This essay continues the thinking I introduced in From design to direction: bridging product design and AI thinking. While it is not required reading, it might give helpful context as I build on some of the ideas from that earlier exploration. Do not worry, I will be here when you return.

Illustration of a signal and a robot element: signals will be key to more targeted generative experiences in a more generative, experiential future.

In my first thought piece, I discussed how AI concepts overlap with the work of product designers as we optimize across three key levels: the interfaces we aim to make frictionless, the journeys that help users find value, and the connection to the business outcomes we hope to achieve.

That work is becoming even more important today. As AI-driven and generative tools move into our day-to-day workflows, the experiences we design no longer stop at the interface. Interfaces can now generate, adapt, and learn from what users do next. The quality of those learning loops depends on the quality of the signals we build into them.

As we move from static interfaces to generative ones, understanding the signals produced by our experiences and how those signals tie back to a user’s intent becomes foundational. These signals help generative systems reduce model loss, meet expectations, and continue doing what we care about most: helping people make progress in ways they find meaningful.

In this essay, I will share an approach for connecting instrumentation efforts to high-level business goals and how that prepares us to map signals to intent in ways that shape iterative optimization loops for generative experiences.

Understanding what to measure: defining meaning before data

There is a lot of writing out there about instrumentation and telemetry, so I will only touch on the basics here. Many of us have heard terms like attrition, cart abandonment, or error rate, usually in conversations with product managers who reference these numbers as indicators of progress or areas where improvement is sought. But I would ask: do you truly know what terms like these mean when thinking about the bigger picture?

Let’s start simple and work out from there by settling on two definitions. Telemetry is the raw data we collect. Instrumentation, on the other hand, is the act of deciding which of those data points matter.
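To make the distinction concrete, here is a minimal sketch in TypeScript. The event names and the allowlist are hypothetical, not drawn from any particular analytics tool; the point is only that instrumentation is a deliberate selection applied on top of raw telemetry.

```typescript
// Telemetry: every raw event the product emits, meaningful or not.
interface TelemetryEvent {
  name: string; // e.g. "page_scroll", "add_to_cart"
  timestamp: number;
  properties: Record<string, unknown>;
}

// Instrumentation: a curated allowlist of the events we have decided
// carry meaning for a given goal. These names are hypothetical.
const INSTRUMENTED_SIGNALS = new Set<string>([
  "add_to_cart",
  "recommendation_clicked",
  "checkout_completed",
]);

// Everything outside the allowlist is still telemetry, but for this
// goal it is noise.
function toSignal(event: TelemetryEvent): TelemetryEvent | null {
  return INSTRUMENTED_SIGNALS.has(event.name) ? event : null;
}
```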

Imagine signals as a bunch of news stations all broadcasting at once at the same volume. Just as we would be unable to focus intelligently, so would AI.

We could instrument every single action in a product, but without knowing what is truly meaningful, all we have is noise. The same challenge applies to AI systems, believe it or not. A model trained on large amounts of similar data might recognize correlations, but without context or intent, it cannot know where to focus.

Deciding what matters does two things. It guides our analysis toward insight instead of noise, and it makes our systems more efficient. As my data and analytics partners taught me at a prior role, over-instrumentation increases storage and processing costs and makes it harder for both humans and machines to understand what users are trying to do.

By deciding what to focus on before we collect data, we can curate meaningful signals. These signals connect behavior to intent and intent to organizational outcomes. That is the foundation of this framework: shaping experience data so it becomes direction for both product teams and intelligent systems. Bear with me, and let’s talk business.

Defining what matters

When we contextualize signals around what is meaningful, selecting certain telemetry signals for our instrumentation efforts, both our teams and AI can start to make sense of all the noise.

When we design digital experiences, countless events occur on the front end and back end. Some happen autonomously, some because of user actions. Each is a potential signal, but we need a structured way to interpret them. One useful lens is Jobs to Be Done (JTBD).

The framework, from the top down: company goal, organizational goal(s), measurement objectives/OKRs, KPIs, and signals. Each row, from the bottom, should ladder up and support the goals above it, creating a stable structure that connects in-product signals to company goals.

To start, most companies, no matter their size, are fundamentally trying to create value and generate revenue. If we acknowledge that end goal, we can place it at the top of our framework. You may already be familiar with terms like Average Contract Value (ACV) or Average Order Value (AOV). These are common ways organizations measure growth and understand how customers perceive the value they offer.

The framework with AOV filled in at the top as the company goal, conveying that all subsequent elements support this high-level goal. We always need to identify the reason we do what we do, and typically it involves money.

Within companies, different groups support the business in different ways. For example, a large online store might have a website team focused on the digital shopping experience and a customer support team helping customers resolve issues. If increasing AOV is a company priority, the website team may set a goal like:

Improve cross-sell or upsell effectiveness.

The framework with an example organizational goal added. Various internal organizations will have their own goals, typically set at the executive level.

From there, we move to the Measurement Objective. This is where we identify the type of behavior we want to influence to support the organizational goal. It is common for multiple measurement objectives to exist for the same goal, especially in larger companies.

In our online store example, one measurement objective might be encouraging users to add additional items before checking out. To make this meaningful, we can express it as an OKR. Atlassian has a helpful overview of how OKRs function in cross-functional teams if the concept is new.

A possible OKR might be:

Increase average items per cart from 1.6 to 1.9.

This is still fairly high-level, usually tied to a quarter or fiscal year, but it moves us closer to meaningful, measurable work.

The framework with an example measurement objective added. Remember, OKRs are not KPIs; they sit at a higher level.

Below this sits the Key Performance Indicator (KPI) layer. KPIs are the specific metrics we can directly influence through design and iteration. They give us the levers we can adjust and monitor as we learn. If KPIs feel a little abstract, Asana has a great explanation of how teams use them as measurable signals of progress.

A related KPI might be:

Increase add-to-cart rate for recommended items by fifteen percent this quarter.

This gives teams something concrete to experiment with, design for, and measure.
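As a rough sketch of what that KPI could look like in code, assuming hypothetical event names for the recommendation surface and reading "fifteen percent" as a relative lift over a baseline rate:

```typescript
// Hypothetical storefront events for the recommendation surface.
interface RecEvent {
  type: "recommendation_impression" | "recommendation_add_to_cart";
  sessionId: string;
}

// The KPI itself: adds of recommended items divided by impressions.
function addToCartRate(events: RecEvent[]): number {
  const impressions = events.filter(
    (e) => e.type === "recommendation_impression",
  ).length;
  const adds = events.filter(
    (e) => e.type === "recommendation_add_to_cart",
  ).length;
  return impressions === 0 ? 0 : adds / impressions;
}

// The quarterly target, read as a 15% relative lift over baseline.
function kpiMet(baselineRate: number, currentRate: number): boolean {
  return currentRate >= baselineRate * 1.15;
}
```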

The framework with an example KPI filled in. KPIs commonly rotate quarterly and still leave room for teams to ideate on how to meet each particular goal.

With all of this in place, we now have a path from:

AOV → organizational goal → measurement objective → OKR → KPI
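For illustration, here is that ladder expressed as a single structure, a sketch using the example values from this essay rather than real targets:

```typescript
// The measurement ladder as one explicit object, so any signal we
// collect can be traced back up to the company goal. All values are
// the illustrative examples from this essay.
const measurementLadder = {
  companyGoal: "Increase Average Order Value (AOV)",
  organizationalGoal: "Improve cross-sell and upsell effectiveness",
  measurementObjective:
    "Encourage users to add additional items before checking out",
  okr: "Increase average items per cart from 1.6 to 1.9",
  kpi: "Increase add-to-cart rate for recommended items by 15% this quarter",
  signals: [] as string[], // filled in once we map signals to intent below
};
```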

Now we can move closer to the design layer: understanding user intent and determining the right signals to collect.

Connecting signals to intent

The signal layer of the framework filled in; see the table of examples below. As alluded to, there is no single end-all signal. It takes a critical eye and collaboration across product, data, and analytics teams to capture the right signals for your org.

Let’s assume our KPI example from before aligns with a JTBD that looks like this:

When I am shopping for something I need, I want to easily discover related or complementary items so my order feels complete.

To understand whether users are making progress toward that job, we need to collect telemetry data from key activities. Not every event will matter, but the right ones reveal intent.

Table of example signal categories related to the overarching shopper Job, and how each one connects to an intent AI could pick up on.

Signals help us see not only what users are doing but whether they are moving in the direction that represents meaningful progress. Positive signals show alignment. Negative signals reveal friction or mismatched intent.
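To make that mapping tangible, here is a hedged sketch of how a team might encode it. The events and their readings are hypothetical examples, not a prescribed taxonomy; the right set comes from the collaboration described above.

```typescript
type IntentReading = "positive" | "negative" | "neutral";

// Hypothetical mapping from instrumented events to what each one
// suggests about the shopper's progress on the Job above.
const intentBySignal: Record<string, IntentReading> = {
  recommendation_clicked: "positive", // exploring complementary items
  recommendation_add_to_cart: "positive", // progress toward a complete order
  recommendation_dismissed: "negative", // mismatched intent
  cart_abandoned: "negative", // friction somewhere in the flow
  page_viewed: "neutral", // on its own, this reveals little
};

function readIntent(eventName: string): IntentReading {
  return intentBySignal[eventName] ?? "neutral";
}
```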

This creates a full path from business outcomes down to the behaviors that reflect real value.

How the framework extends into generative AI

Everything up to this point applies to traditional products where designers create fixed interfaces. Generative experiences change the structure of the work. Instead of designing every step, we are designing the conditions a system uses to decide what to show, what to emphasize, and how to adapt.

Generative systems do not follow pre-built screens. They follow patterns, predictions, and signals.

In generative experiences, intent can surface explicitly through a question, a prompt, or a choice, or implicitly through behavior. Google’s PAIR guidebook talks about this distinction between explicit intent and intent inferred from behavior, which forms the basis for how AI predicts what comes next.

Once the system infers intent, it predicts what the user needs next. That prediction determines what gets generated, whether that is a layout, a set of recommendations, content variations, or the next step in a flow. Even so, we still need a way to understand whether the prediction was correct. Microsoft’s Human-AI Interaction Guidelines reinforce this by encouraging systems to make their reasoning visible and understandable, so people can verify whether predictions make sense.

This is where the framework becomes essential. Organizational goals, OKRs, KPIs, and JTBDs give us a standard for what “good” looks like. They describe the journey and the progress that represent value for users. The signals we instrument become evidence of whether the generated experience actually supported that progress.

Even when a generative system produces something entirely new, we can still evaluate it. We are not comparing the UI to a previous layout. We are comparing behavior to expected signals. If users are moving forward, the inference was correct. If users hesitate, ignore content, or abandon, the system has made the wrong assumption. OpenAI describes a similar idea in their work on learning from human feedback, where systems compare predicted outcomes to desired ones.

This gives designers two places to focus optimization efforts. We can refine the experience that was generated, or we can refine how the system infers intent. Sometimes this means improving interface patterns. Sometimes it means strengthening the signals we collect. Sometimes it means giving users clearer ways to express what they want.

Either way, the work becomes the same. We check the signals produced by a generative interface against the expected signals tied to a JTBD, and we use that comparison to guide how the system adapts.
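As a minimal sketch of that comparison loop, assuming each Job can be tagged with the signals we expect when users are making progress (all signal names are hypothetical):

```typescript
interface Jtbd {
  statement: string;
  expectedSignals: string[]; // signals that indicate progress on the job
}

// Score a generated experience by how many of the expected signals
// actually occurred. A low score suggests the system inferred the
// wrong intent, or generated the wrong experience for the right one.
function progressScore(job: Jtbd, observedSignals: string[]): number {
  const seen = new Set(observedSignals);
  const hits = job.expectedSignals.filter((s) => seen.has(s)).length;
  return job.expectedSignals.length === 0
    ? 0
    : hits / job.expectedSignals.length;
}

// Usage with the shopper Job from earlier:
const completeOrderJob: Jtbd = {
  statement:
    "When I am shopping for something I need, I want to easily discover " +
    "related or complementary items so my order feels complete.",
  expectedSignals: ["recommendation_clicked", "recommendation_add_to_cart"],
};

// progressScore(completeOrderJob, ["recommendation_clicked"]) -> 0.5
```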

This is how designers contribute to AI-driven experiences. We are not only designing what users see. We are designing the understanding the system relies on to generate it.

Where we go from here

All of this sets up the next step. Generative systems cannot rely on signals alone. They also need to understand our components, patterns, and design systems in deeper, more semantic ways. Figma has written about this shift in their guidance on semantic design systems, emphasizing that AI needs component meaning, not just component visuals, to generate interfaces responsibly.

In the next article, I will explore how semantic design systems help shape what generative systems create, and how that connects to the broader ideas of encoding, pattern representation, and meaning within AI models.

Further reading

If you would like to explore some of the ideas mentioned in this article, here are a few helpful starting points:

Instrumentation and product measurement
• What is a KPI? — Asana
https://asana.com/resources/key-performance-indicator-kpi
• How OKRs work — Atlassian
https://www.atlassian.com/agile/agile-at-scale/okr

AI, intent, and prediction quality
• People + AI Guidebook — Google PAIR
https://pair.withgoogle.com/guidebook/
• Learning from human feedback — OpenAI
https://openai.com/index/learning-to-summarize-with-human-feedback/
• Guidelines for Human-AI Interaction — Microsoft Research
https://www.microsoft.com/en-us/research/project/guidelines-for-human-ai-interaction/

Design systems and meaning
• The future of design systems is semantic — Figma
https://www.figma.com/blog/the-future-of-design-systems-is-semantic/

These pieces reflect similar themes around intent, signals, and how intelligent systems learn. They each offer helpful perspectives on where our tools and practices may be heading.

