
Artificially Intelligent Marketing
The episode opens with a deep dive into OpenAI’s latest release, GPT 5.2, which boasts a massive 400,000‑token context window and claims near‑human performance on several benchmarks. Sam Altman’s internal “Code Red” memo signals urgency after OpenAI’s models slipped behind Google’s Gemini 3 and Anthropic’s Opus 4.5 in head‑to‑head tests. Listeners hear mixed user feedback: while the larger context promises richer agentic use cases, early adopters report higher hallucination rates and a slower rollout to enterprise customers, raising questions about whether the improvements are genuine or merely benchmark‑focused.
Beyond the model race, the hosts explore shifting business strategies. OpenAI’s partnership with Disney, granting rights to generate content with iconic characters, underscores a consumer‑first pivot, while competitors like Mistral double down on on‑prem, fine‑tuned solutions for enterprise data security. Anthropic’s focus on code, Google’s all‑in consumer suite, and Opus 4.5’s superior instruction‑following for structured writing illustrate a fragmenting market. The discussion highlights that high‑quality technical writing may soon require custom‑tuned models, a niche where Mistral and similar firms could gain traction.
Finally, the conversation turns to AI‑driven search visibility. Recent research shows that classic “best‑of” listicles still rank strongly in AI chat results, especially when they include comparison tables and place the promoted brand at the top of the list. However, the hosts warn that as major SEO tools and thought leaders publicise the tactic, AI models may adjust, diminishing its effectiveness. They advise marketers to combine listicle tactics with genuine value, robust content, and diversified SEO practices to future‑proof their AI search presence.
The AI model race intensifies as OpenAI rushes GPT 5.2 to market, whilst marketers discover a surprisingly effective shortcut to AI search visibility.
This week, we analyse OpenAI's scramble to compete after Google's Gemini 3 Pro and Anthropic's Claude Opus 4.5 threatened their dominance. We examine GPT 5.2's contested benchmark scores and whether the rushed release has created more problems than it solved. We also explore the resurgence of 2009-era SEO tactics for AI visibility, revealing how "best of" listicles are gaming generative search results—and how long that window might stay open. Plus: OpenAI's enterprise adoption report, Gemini's new native audio translation, and Martin's bricked Limitless pendant.
Key Takeaways
OpenAI's defensive launch: GPT 5.2 achieved impressive benchmark scores (53% on ARC-AGI vs 37.6% for Claude Opus 4.5), but user reports suggest degraded performance on real-world tasks, raising concerns about benchmark optimisation over practical utility.
AI search visibility follows old playbooks: Publishing "best of" listicles with your company ranked first is proving remarkably effective, with results appearing within 1–2 weeks rather than the 3–12 months typical for SEO.
Enterprise adoption accelerates: OpenAI reports 8× growth in ChatGPT Enterprise usage, with workers saving 40–60 minutes per day; 87% of IT workers report faster issue resolution and 85% of marketers report faster campaign execution.
Model providers face different futures: OpenAI pursues consumer markets through a Disney partnership for Sora-generated content, whilst Anthropic and Mistral focus explicitly on enterprise solutions, such as on-prem deployments.
Translation goes universal: Gemini's native audio model now supports real-time translation through any Bluetooth earphones, removing hardware restrictions that previously limited adoption.
Projects over prompts: OpenAI reports a 19× increase in custom GPT and project usage, indicating a shift from casual querying to repeatable workflow automation.
What to Do Now
Test GPT 5.2 cautiously: If you have access, compare outputs against 5.1 for your specific use cases before switching workflows; early reports suggest mixed results. A minimal comparison sketch follows this list.
Deploy the listicle strategy immediately: Create "best of" articles in your category with your company ranked first. Include detailed comparison tables. Speed matters—this window may close as AI providers refine their models.
Monitor your AI visibility: Check how ChatGPT Search, Gemini, and Claude answer queries about your product category. Track changes weekly to understand which content formats they prioritise. A simple logging sketch also follows this list.
Audit your project setup: If you're using ChatGPT Enterprise, review whether your projects contain too much general context. More focused, use-case-specific projects typically perform better.
Invest in genuine reviews: As AI providers wise up to self-published listicles, review platforms like Clutch, G2, or Google Reviews will likely become more important for AI search visibility.
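For the cautious GPT 5.2 testing above, here is a minimal side-by-side sketch in Python, assuming the official OpenAI SDK. The model identifiers and prompts are placeholders, not confirmed API names, so check your account's model list before running:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical model identifiers -- confirm the exact names in your account.
MODELS = ["gpt-5.2", "gpt-5.1"]

# Replace with prompts drawn from your real workflows.
PROMPTS = [
    "Summarise this positioning statement in two sentences: ...",
    "Draft a subject line for a product-launch email about ...",
]

for prompt in PROMPTS:
    print(f"=== Prompt: {prompt[:60]} ===")
    for model in MODELS:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {model} ---")
        print(response.choices[0].message.content, "\n")
```

Run it against a handful of representative tasks and compare the outputs by eye before committing any workflow to the newer model.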
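And for the weekly visibility check, a minimal logging sketch under the same assumptions (OpenAI SDK, placeholder model name, brand, and queries). Gemini and Claude would need their own clients, and a cron job or scheduler would handle the weekly cadence:

```python
import csv
import datetime

from openai import OpenAI

client = OpenAI()

BRAND = "Acme Analytics"  # hypothetical brand to watch for

# Placeholder category queries -- swap in your own product category.
QUERIES = [
    "What are the best B2B marketing analytics platforms?",
    "Which marketing analytics tools should a mid-size company shortlist?",
]

# Append one timestamped row per query so week-over-week changes are diffable.
with open("ai_visibility_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for query in QUERIES:
        response = client.chat.completions.create(
            model="gpt-5.1",  # assumption: use whichever model you track
            messages=[{"role": "user", "content": query}],
        )
        answer = response.choices[0].message.content
        mentioned = BRAND.lower() in answer.lower()
        writer.writerow(
            [datetime.date.today().isoformat(), query, mentioned, answer]
        )
```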
Mentioned in This Episode
Platforms/Features: GPT 5.2, GPT 5.1 Pro, Gemini 3 Pro, Claude Opus 4.5, Nano Banana, ChatGPT Search, NotebookLM, Sora, Microsoft Copilot Pro, Manus
Companies: OpenAI, Google, Anthropic, Disney, Mistral AI, Limitless, Meta, HSBC, Boston Dynamics, Blend B2B
Tools: Ahrefs, Clutch, G2
References: ARC-AGI benchmark, OpenAI Academy, Ethan Mollick's prompt research, Moonshots podcast