ALAN WOLK (AW): What exactly is Comscore measuring?
SMRITI SHARMA (SS): What we’re doing is measuring how AI tools are actually being used—how they affect consumer journeys, search behavior, click-through rates, and ad campaigns. Think of it this way: whether it’s ChatGPT, Gemini, or any of the other tools, we can track adoption, how people interact with them, and what that means for companies trying to reach those consumers.
It’s not limited to one platform or one kind of AI. There are a lot of unanswered questions: Are these tools protecting privacy? How are they changing the path to purchase? We want to give the industry a clear picture of where things are heading.
AW: You mentioned that you’re tracking 117 AI tools across nine categories. How did you decide what to start with?
SS: We used Comscore’s existing measurement assets—our desktop meter, mobile meter, and Total Home panel. These are opt-in, privacy-focused systems that we’ve relied on for years.
The first 117 tools are really a starting point—something to get into the market, see how clients respond, and then scale. Because the assets are homegrown and expandable, we’re not dependent on anyone else. That means we can go as broad as the market demands, and we can do it globally, not just in the U.S. This isn’t a one-off project—it’s designed to grow with the industry.
AW: Can you walk me through the framework of what you’re actually measuring?
SS: Right now we're looking at three levels:
Visitation: Who’s using which tools, how adoption is trending, and how usage breaks down across demographics.
Search behavior: How AI-generated answers are affecting click-through rates, and which sites are feeding those summaries.
Prompts and responses: What people are actually asking the tools, what answers they get back, and how the conversation flows.
That structure lets us go deeper than just counting visits. It shows the full journey.
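To make that three-level framework concrete, here is a minimal sketch of how panel events at each level might be represented. The class and field names are illustrative assumptions for the example, not Comscore's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative records only -- field names are assumptions, not Comscore's schema.

@dataclass
class VisitationEvent:
    """Level 1: who used which tool, when, and from which demographic panel cell."""
    panelist_id: str          # opt-in, anonymized panel identifier
    tool: str                 # e.g. "ChatGPT", "Gemini", "Claude"
    category: str             # e.g. "assistant", "image generation"
    timestamp: datetime
    demo_bucket: str          # e.g. "M 18-24"

@dataclass
class SearchEvent:
    """Level 2: a search result page, with or without an AI-generated summary."""
    query: str
    engine: str                       # e.g. "Google", "Bing"
    ai_overview_shown: bool
    cited_domains: list[str] = field(default_factory=list)  # sites feeding the summary
    clicked_through: bool = False     # did the user click an organic result?

@dataclass
class PromptTurn:
    """Level 3: one turn in a conversation with an AI tool."""
    session_id: str
    tool: str
    prompt: str
    response: str
    turn_index: int           # position in the conversation flow
```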
AW: For the first use case, visitation, how are you categorizing the tools?
SS: We classify them by use case—assistants, super tools, image generation, productivity, audio, design, and so on. That way we can see not only which tools are growing but what people are using them for. For example, a tool might spike in music-related tasks but hardly register in presentations. We can also track momentum over time. Tools like Claude have doubled usage in just a few months, while others launch with a splash and then fade.
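As a rough illustration of the momentum tracking described above, the sketch below computes month-over-month growth in unique visitors per tool; the tools, months, and visitor counts are made up for the example.

```python
from collections import defaultdict

# Hypothetical monthly unique-visitor counts per tool (illustrative numbers only).
monthly_uniques = {
    ("Claude", "2025-06"): 1_000_000,
    ("Claude", "2025-07"): 1_400_000,
    ("Claude", "2025-08"): 2_100_000,
    ("SomeImageTool", "2025-06"): 900_000,
    ("SomeImageTool", "2025-07"): 950_000,
    ("SomeImageTool", "2025-08"): 930_000,
}

def month_over_month_growth(counts: dict[tuple[str, str], int]) -> dict[str, list[float]]:
    """Return the sequence of month-over-month growth rates for each tool."""
    by_tool: dict[str, list[tuple[str, int]]] = defaultdict(list)
    for (tool, month), uniques in counts.items():
        by_tool[tool].append((month, uniques))
    growth = {}
    for tool, series in by_tool.items():
        series.sort()  # ISO month strings sort chronologically
        growth[tool] = [
            (curr - prev) / prev
            for (_, prev), (_, curr) in zip(series, series[1:])
        ]
    return growth

print(month_over_month_growth(monthly_uniques))
# A tool that "doubled usage in a few months" shows large positive rates;
# a launch-and-fade tool shows flat or negative ones.
```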
AW: What about search—what are you looking at there?
SS: Search is changing fast. On Google you might see an AI-generated overview instead of a list of links. On Bing, Copilot sits next to traditional results. We track when and why those AI answers appear, and how that alters click-through behavior. If people get the summary they need, they may stop there. That has real implications for publishers who depend on search traffic.
The next step, which we’re building toward, is identifying which sites feed those summaries. If a summary is mostly built from three domains, and those sites see a sudden traffic drop, you can connect that back to the AI layer. That’s the kind of cause-and-effect the market is asking us to surface.
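The two search effects described in this answer (the click-through shift when an AI overview appears, and the link between summary-source domains and their traffic) could be approximated along these lines; all of the events, domains, and traffic figures below are invented for illustration.

```python
from collections import Counter

# Hedged sketch of the two search analyses, on invented data:
# (1) click-through rate with vs. without an AI overview,
# (2) which domains feed the summaries, lined up against their traffic change.
search_events = [
    {"ai_overview_shown": True,  "clicked_through": False, "cited_domains": ["siteA.com", "siteB.com"]},
    {"ai_overview_shown": True,  "clicked_through": True,  "cited_domains": ["siteA.com", "siteC.com"]},
    {"ai_overview_shown": False, "clicked_through": True,  "cited_domains": []},
    {"ai_overview_shown": False, "clicked_through": True,  "cited_domains": []},
]

# Hypothetical month-over-month change in measured visits per domain.
traffic_change = {"siteA.com": -0.18, "siteB.com": -0.05, "siteC.com": 0.02}

def ctr(events, with_overview: bool) -> float:
    subset = [e for e in events if e["ai_overview_shown"] == with_overview]
    return sum(e["clicked_through"] for e in subset) / len(subset) if subset else float("nan")

print(f"CTR with AI overview:    {ctr(search_events, True):.0%}")
print(f"CTR without AI overview: {ctr(search_events, False):.0%}")

citations = Counter(d for e in search_events for d in e["cited_domains"])
total = sum(citations.values())
for domain, count in citations.most_common():
    print(f"{domain}: {count / total:.0%} of summary citations, "
          f"{traffic_change.get(domain, 0.0):+.0%} traffic change")
# A domain with a high citation share and a simultaneous traffic drop is a
# candidate for the AI-layer effect described above -- a signal to investigate,
# not proof of causation.
```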
AW: Does that tie into the idea that “GEO,” or GPT Engine Optimization, will become the new SEO?
SS: Exactly. Companies want to know why some queries trigger an AI summary while others don’t. They want to know what’s driving those summaries and how customer journeys differ when an AI overview is present. That’s the shift from SEO to GEO in action.
AW: The third layer is prompts and responses. What is the methodology there?
SS: We can see what people type into tools like ChatGPT, Copilot, or Gemini, and how the dialogue unfolds. That means we’re not just looking at the final answer but the path to get there—the follow-up questions, the refinements, when people stop.
We can also segment by audience. For example, what do 18–24-year-old men ask versus older demographics? That helps AI developers understand where models need improvement, and it helps companies see how different groups are using the tools. It also lets us identify gaps—cases where people ask open-ended questions but don’t get complete answers. Those gaps highlight opportunities for AI companies to retrain and improve their models.
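A toy version of the segmentation and gap analysis described here might look like the following; the demographic buckets, sample prompts, and the completeness flag (standing in for whatever scoring is actually used) are all assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical prompt/response records with a panel demographic attached.
prompt_records = [
    {"demo": "M 18-24", "prompt": "best budget gaming laptop?", "response_complete": True},
    {"demo": "M 18-24", "prompt": "plan a 3-day trip to Austin", "response_complete": False},
    {"demo": "F 35-44", "prompt": "summarize this earnings report", "response_complete": True},
    {"demo": "F 35-44", "prompt": "compare 529 plans for my state", "response_complete": False},
]

# Segment prompts by demographic bucket.
by_demo: dict[str, list[str]] = defaultdict(list)
for rec in prompt_records:
    by_demo[rec["demo"]].append(rec["prompt"])

# Flag "gaps": prompts where the response was judged incomplete.
gaps = [rec["prompt"] for rec in prompt_records if not rec["response_complete"]]

for demo, prompts in by_demo.items():
    print(f"{demo}: {len(prompts)} prompts, e.g. {prompts[0]!r}")
print("Open gaps for model improvement:", gaps)
```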
AW: So you’re mapping entire conversations, not just single interactions?
SS: Yes. Because the data is opt-in and session-based, we can connect it across platforms. That gives us visibility into gaps between what people want and what they get. Those gaps are opportunities for companies to train models better and for marketers to understand evolving behavior.
AW: How frequently are you planning to publish your results?
SS: We put the system into production in July. The first results are already out, and we’re updating monthly. For example, September reports reflect August data, broken down by verticals and visitation patterns. And because we can track historically from the moment a tool launches, we’re able to show its entire growth curve. This isn’t just a snapshot—it’s a way to understand how adoption develops over time. And again, it’s global—we’re not restricted to U.S. usage.
AW: Where do you see this heading—do you have a roadmap for where you want to take it next?
SS: This is only the beginning. Right now we’re measuring visitation, search impact, and prompts. But AI is evolving daily, and so is our roadmap. The demand is already strong—from companies building AI tools and from those just trying to understand how AI affects their customers. The goal is to keep expanding coverage, add more detail, and give the market the clarity it’s asking for.
Here are Comscore’s latest topline desktop and mobile stats.