TL;DR: As AI answer engines like OpenAI’s ChatGPT and Google Gemini reshape how people find information, marketers, publishers, and enterprise software teams are rolling out new tools and research initiatives to adapt content strategies for a zero-click world.
The shift from traditional search to AI-powered answer engines is accelerating. According to a Forrester Research analysis published in Destination CRM, ChatGPT now receives an average of eight prompts per user per day, compared to three queries per day on traditional Google Search. Those prompts are also six times longer on average, signaling a deeper shift in how people engage with AI tools versus traditional search engines.
Zero-Click Search Forces Publishers and Marketers to Act
The practical fallout is that fewer users ever reach a website after searching. A Bain & Company survey cited by Demand Gen Report found that roughly 80% of consumers now rely on zero-click results in at least 40% of their searches. Forrester data adds that B2B buyers are adopting AI-powered search at three times the rate of general consumers.
In response, Informa TechTarget launched two new content solutions: an AI Visibility Audit and a GEO Topic Planner. The AI Visibility Audit helps marketers identify where their brand appears, or fails to appear, across AI-generated responses. The GEO Topic Planner aligns content plans with how AI systems rank topical authority to reduce wasted marketing spend. Informa TechTarget CEO Gary Nugent noted the company increased AI-driven traffic to its own media properties by 235% in 2025 and quadrupled membership sign-ups from AI referrals.
Research Initiative Benchmarks the Scale of AI Search Disruption
Academic publishers are also tracking the problem. Kudos expanded its “Taming the Crocodile” research initiative, adding sponsors including AIP Publishing, IEEE, Clarivate, and Cactus Communications. The study now includes surveys of both librarians and publishers to benchmark how AI-generated overviews are changing content usage, user behavior, and how quickly institutions are responding. The findings are intended to give publishers anonymized data to gauge their own progress against peers.
Microsoft Raises the Bar on Multi-Model AI Research Quality
On the enterprise side, Microsoft upgraded its Microsoft 365 Copilot Researcher agent with a new Critique mode that runs OpenAI’s GPT and Anthropic’s Claude models in sequence on the same task. GPT drafts the response, then Claude reviews it for accuracy, completeness, and citation integrity before delivery. Microsoft says the approach produced a 13.8% improvement on the DRACO benchmark, the industry standard for deep research quality. A companion Model Council feature lets users compare side-by-side responses from different AI models.
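Microsoft has not published implementation details for Critique mode, but the sequential draft-then-critique pattern it describes can be sketched with injectable model callables. A minimal sketch, assuming any two LLM APIs behind the two callables (the function names, stub models, and prompts here are illustrative assumptions, not Microsoft's actual code):

```python
from typing import Callable

def draft_then_critique(
    draft_model: Callable[[str], str],
    critique_model: Callable[[str, str], str],
    task: str,
) -> dict:
    """Run two models in sequence on the same task: one drafts,
    the other reviews the draft for accuracy, completeness, and
    citation integrity before the result is delivered."""
    draft = draft_model(task)
    review = critique_model(task, draft)
    return {"draft": draft, "critique": review}

# Stub "models" illustrate the control flow without any API calls;
# in practice these would wrap calls to a GPT and a Claude endpoint.
drafter = lambda task: f"DRAFT for: {task}"
critic = lambda task, draft: f"REVIEW of '{draft}': citations verified"

result = draft_then_critique(drafter, critic, "market sizing for GEO tools")
```

The key design choice is that the critique model sees both the original task and the draft, so it can judge completeness against the request rather than only proofreading the text.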
Microsoft also launched a Claude-powered Copilot Cowork agent designed for long-running, multi-step tasks. It turns high-level goals into structured plans executed across apps, files, and workflows over time, targeting use cases like security team task delegation.
What Marketers Should Do Now
Forrester principal analyst Nikhil Lai outlines several practical moves in his answer engine optimization (AEO) guide.
Answer engine crawlers ignore robots.txt, revisit pages repeatedly, and hit sites with hundreds of requests per second, so technical hygiene matters more than ever. Brands also need to distribute content authored by subject matter experts beyond their own websites, since answer engines increasingly weight off-site citations over owned properties. Participating in forums like Reddit and managing reviews on aggregators builds the authority signals AI systems use when constructing answers.
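A concrete first hygiene step is simply measuring how much AI crawler traffic a site already receives. A minimal sketch, assuming raw web-server access-log lines and a few publicly documented AI crawler user-agent tokens (the token list is illustrative and non-exhaustive; check each vendor's crawler documentation):

```python
from collections import Counter

# User-agent substrings for known AI answer-engine crawlers
# (illustrative, non-exhaustive list).
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def count_ai_crawler_hits(log_lines):
    """Tally requests per AI crawler from raw access-log lines."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_CRAWLERS:
            if bot in line:
                hits[bot] += 1
    return hits

# Hypothetical combined-format log lines for illustration.
sample_log = [
    '1.2.3.4 - - [10/Jan/2025] "GET /pricing HTTP/1.1" 200 "-" "Mozilla/5.0 ... GPTBot/1.0"',
    '5.6.7.8 - - [10/Jan/2025] "GET /blog HTTP/1.1" 200 "-" "Mozilla/5.0 ... ClaudeBot/1.0"',
    '9.9.9.9 - - [10/Jan/2025] "GET / HTTP/1.1" 200 "-" "Mozilla/5.0 (regular browser)"',
]

counts = count_ai_crawler_hits(sample_log)
```

Running a tally like this over a day of logs shows which answer engines are already indexing a site and whether their request volume warrants rate limiting or caching.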
- AI answer engines now receive significantly more, and longer, queries than traditional search, meaning brands that are absent from AI-generated answers are invisible to a growing share of potential buyers.
- Informa TechTarget’s GEO Topic Planner and AI Visibility Audit offer structured tools for diagnosing and closing AI visibility gaps.
- Microsoft’s dual-model GPT-plus-Claude approach shows that accuracy benchmarks for AI research are rising, raising the quality bar for content that AI systems choose to cite.
- Technical content infrastructure, such as updated sitemaps, schema markup, and pre-rendered pages, is now a prerequisite for AI crawler indexing, not just a nice-to-have.
- Off-site authority building, through forums, third-party experts, and review platforms, is increasingly central to how answer engines evaluate brand credibility.
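The schema-markup point above can be made concrete. A minimal sketch that generates a schema.org Article block in JSON-LD, the structured-data format crawlers parse from a page's head (the field choices follow the public schema.org Article vocabulary; the page details are placeholders, not real content):

```python
import json

def article_jsonld(headline, author, date_published, url):
    """Build a schema.org Article JSON-LD payload for embedding in a page."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
        "url": url,
    }

payload = article_jsonld(
    headline="Zero-Click Search Forces Publishers to Act",
    author="Example Author",          # placeholder
    date_published="2025-01-10",      # placeholder
    url="https://example.com/post",   # placeholder
)

# Embedded in the page as: <script type="application/ld+json">...</script>
snippet = json.dumps(payload, indent=2)
```

Explicit structured data like this gives crawlers machine-readable facts (who wrote the page, when, about what) instead of forcing them to infer those facts from rendered HTML.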
Key Takeaways
- Zero-click AI search is now mainstream, with 80% of consumers skipping traditional results in at least 40% of their searches, according to Bain & Company data.
- Generative Engine Optimization, or GEO, is emerging as a formal practice distinct from traditional SEO, with dedicated tools now available from players like Informa TechTarget.
- Microsoft’s 13.8% DRACO benchmark improvement from pairing GPT and Claude signals that multi-model workflows are becoming standard for high-quality AI research outputs.
- Content that answers specific questions, carries citations, and appears across authoritative off-site platforms performs best in AI answer engine results.
- Publishers and librarians are beginning to formally measure how AI overviews affect content usage, suggesting the industry is moving from awareness to structured response.