Marcelo

How Struere Uses Experiwall to Track Whether ChatGPT Recommends Them

Marco built an AI agent platform. But he had no idea if ChatGPT would recommend it. Here's how he found out.

Lately I've been paying as much attention to my product appearing in ChatGPT as I do to it appearing in Google. People are discovering tools through AI now, and if you're building a product today, that's a channel worth watching.

A few weeks ago I had a call with Marco, the founder of Struere.dev. Struere is an AI agent platform with a built-in data layer, dynamic prompts, automation, and integrations. You define agents, data types, and automations as code, then talk to them via API. It's a solid tool for developers building AI-powered workflows.

Marco's problem was simple: he wanted AI to find and recommend Struere. He was doing the usual things: SEO, writing docs, building in public, shipping features. But he had no idea whether ChatGPT would actually mention Struere when someone asked for an AI agent platform.

When someone types 'best AI agent platform' into ChatGPT, does your product show up? Does it get compared to competitors? Does ChatGPT recommend it from its own knowledge, or does it only find it when browsing the web? Most founders can't answer these questions. There's no 'Google Search Console' for AI recommendations. You're just guessing.

That conversation pushed me to build something I'd been thinking about for a while. We call it AI Visibility, and it's live in Experiwall. Every 24 hours, we run up to 100 queries against ChatGPT on your behalf. The queries are based on your product, your category, and the kind of questions your potential users would ask.

For Struere, that means queries like 'best platform to build AI agents,' 'AI agent framework with built-in database,' 'alternative to LangChain for production agents,' and dozens more. For each query, we check three things: Does ChatGPT mention Struere at all? Does it compare Struere with competitors? And does it recommend Struere positively?
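To make the three checks concrete, here's a minimal sketch of how a single response could be classified. This is my own illustration, not Experiwall's actual pipeline — the function name, the keyword heuristics, and the competitor list are all assumptions for the example.

```python
# Illustrative sketch only: Experiwall's real classification is not public.
# Given the text of a ChatGPT answer, score it along the three axes
# described above: mention, comparison, and positive recommendation.

def classify_response(answer: str, product: str, competitors: list[str]) -> dict:
    text = answer.lower()
    mentioned = product.lower() in text
    # "Comparison" here is a crude proxy: the product and at least one
    # known competitor appear in the same answer.
    compared = mentioned and any(c.lower() in text for c in competitors)
    # Naive positivity check: the product is mentioned alongside a
    # recommending phrase.
    positive_cues = ("recommend", "best", "great choice", "top pick")
    recommended = mentioned and any(cue in text for cue in positive_cues)
    return {"mentioned": mentioned, "compared": compared, "recommended": recommended}
```

Run daily over up to 100 queries, even heuristics this simple produce a signal you can track over time; a production version would likely use an LLM to judge sentiment rather than keyword matching.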

You get the results by email every day. You can see which queries triggered a mention, which ones didn't, and how the results change over time. When things improve, you get a jumping cat in your report. Small thing, but it makes opening the email fun.

The difference between this and just asking ChatGPT yourself is consistency. You could manually type 10 queries and check, but you wouldn't do it every day. You wouldn't track changes. You wouldn't notice that ChatGPT started recommending a competitor last Tuesday after they published a new blog post. You need automation to catch that.
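Catching that kind of shift is just a diff between two daily runs. A sketch of the idea, under the assumption that each run boils down to a map from query to whether your product was mentioned (the function and data shape are mine, for illustration):

```python
# Illustrative sketch: flag queries whose mention status flipped between
# two daily runs, so a change like "ChatGPT stopped recommending us last
# Tuesday" doesn't go unnoticed. Each dict maps query -> mentioned?

def diff_runs(yesterday: dict[str, bool], today: dict[str, bool]) -> dict[str, list[str]]:
    gained = [q for q, hit in today.items() if hit and not yesterday.get(q, False)]
    lost = [q for q, hit in today.items() if not hit and yesterday.get(q, False)]
    return {"gained": gained, "lost": lost}
```

The daily email is essentially this diff plus the raw results, which is exactly what manual spot-checking can't give you.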

Marco's first report surprised him. Some queries where he expected Struere to appear returned zero mentions. Others, where he wasn't even trying, had Struere as a top recommendation. That mismatch told him exactly where to focus: improving docs for specific use cases, writing content for the blind spots, doubling down on what was already working.

This is still early. We're collecting data, learning which query patterns produce the most useful signals, and building dashboards to visualize trends. But the idea is solid: if people discover tools through AI, you need to know whether AI recommends yours.

I think AI visibility will be as important as SEO in the next few years. Marco saw that right away. If you've never asked ChatGPT whether it would recommend your product, try it once. Then imagine doing that 100 times a day, automatically, with tracking.

If you want to see how ChatGPT talks about your product, sign up for Experiwall. We're in early access and AI Visibility is available now.
