Intercom Doubles Productivity in Nine Months with AI
At Intercom we literally doubled productivity in 9 months. Depending on your attitude to AI you'll either say "Bullshit" or "That's all?". In both cases I think you're being ridiculous. This is a 500-person org working on an 8-million-line-of-code product. And we doubled productivity. In the post below we share every detail, from measurements to costs to everything in between. Have a read
Fin's “Ask A Teammate” Speeds Approvals via Slack
"We need someone to approve this" is a very common statement in Customer Support (e.g. a large refund, or info that needs verification). Fin now supports this with our (just shipped) "Ask A Teammate" feature. Fin will now go...
Scaling an AI Feature From One to 8,000 Customers
Thrilled to see this one ship to everyone. It's one thing to hand roll an AI feature once for one customer. Rolling this out reliably to 8000 customers is a different type of challenge. Kudos to Pratik & Team...

Stateless or Solo AI Leads to Dogfight Chaos
A lot of AI rocketships end up in the dogfight category because they're either stateless (easy/free to hot-swap one for another, and another) or single-player (I use [X] and my teammates use [Y, Z] and it just doesn't matter at all)...
AI Era's New Table Stakes for Adaptation and Competition
This is a great article about the new table-stakes of adapting to, and competing in, the AI era.
Startups Inflate ARR: Not Annual, Not Recurring, Not Revenue
Startups misreporting ARR is frustrating as an infrequent angel investor who relies on trust (no time to inspect). ~Every startup claiming 0→$[X]M ARR in [Y] months always forgets to clarify that it's either a) not really Annual b) not really...

Halve KV‑Cache Memory by Sharing Attention Across Heads
Our AI group published a novel finding from pre-training research. In short: you can cut KV-cache memory in half by sharing most of the attention structure across heads while keeping small per-head differences, without hurting model quality or speed. I'll explain this as...
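The post doesn't spell out the exact sharing mechanism, but the memory claim follows from simple cache arithmetic. A minimal sketch, assuming a hypothetical model shape (32 layers, 32 heads, head dim 128, fp16) and modeling "sharing most of the attention structure" as halving the number of distinct K/V heads:

```python
def kv_cache_bytes(layers, seq_len, n_kv_heads, head_dim, bytes_per_elem=2):
    # K and V each store a (seq_len x head_dim) tensor per distinct KV head,
    # per layer; the factor of 2 at the front accounts for K plus V.
    return 2 * layers * seq_len * n_kv_heads * head_dim * bytes_per_elem

# Baseline: one distinct K/V pair per attention head (illustrative numbers).
full = kv_cache_bytes(layers=32, seq_len=4096, n_kv_heads=32, head_dim=128)

# Shared: most structure shared across heads, small per-head differences kept.
# Modeled here as halving the count of distinct KV heads that must be cached.
shared = kv_cache_bytes(layers=32, seq_len=4096, n_kv_heads=16, head_dim=128)

print(full / shared)  # cache memory ratio: 2.0, i.e. cut in half
```

All names and numbers here are illustrative, not Intercom's; the point is only that KV-cache size scales linearly in the number of distinct cached K/V heads, so sharing them across attention heads shrinks the cache proportionally.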

LLMs Dictate Tech Stacks; Every Business Needs a CLI
LLMs decide what languages and libraries we use. They’ll also decide what products a business runs on. They'll prefer ones that work with them. Every business now needs a CLI. fin(.)ai/cli https://t.co/ycezh9XYWd
AI Reveals Who’s Engaged, Ambitious, and Brave
AI tests your company's: 1) Engagement: who's paying attention in their role, company, industry over the past 4 years; 2) Ambition: who wants to win, who just wants to stay alive, who's happy to fade away slowly…; 3) Leadership: who'll make brave decisions...
Apex Model Now Accessible via API
The #1 question I got last week was "Will @Fin_ai's Apex model be available as an API?" The answer, now, is yes. Good read below as to why...
Productivity Edge Is Short‑lived; Act Now
Rory is correct. For startups, extreme productivity is already table stakes amongst your peers. For larger orgs there's maybe a year of edge in it before your big competitors have either adapted or died. In no world do you have...

Old‑School ML Turns Messy Support Chats Into Actionable Insights
Great detailed write-up by Mariia explaining how we built topic modelling that turns surprisingly messy support chats into structured, actionable insights. A fun reminder that "traditional" ML (e.g. >4 years old) is still very useful https://t.co/vo9KfPIg6q https://t.co/KFMGcHYHxN
Insights on Launching and Scaling AI Startups with Kyle Poyar
It's always great to chat with Kyle Poyar for his new series all about launching and growing as an AI startup. Check it out below
Commit to Doubling Output: Set, Communicate, Execute
More details on our 2× initiative here. If I could offer generalized advice, it's just 2 things: 1. Make and communicate a firm decision that you want a massive, measured boost in productivity. (Ours was to 2× output in 1...
Claude Code Generates 90% of PRs After 2X Initiative
In June 2025 @darraghcurran announced our 2X initiative (link in thread). Below @brian_scanlan shares where we're at today (~90% of PRs created by Claude Code). It's exciting to see all this work come to fruition.