I think the situation with AI and security that this talk highlights is indicative of how AI disruption will play out more broadly. AI is going to make it really easy and cheap to find exploits in software. The end state is fantastic: software will be vastly more secure, because AI will do superhuman security analysis on every change, and software will effectively stop having security vulnerabilities. The transition, though, may be a problem: in the near term, every few months more powerful models will expose more of the hidden backlog of security problems in existing software. I think this is a general pattern with AI disruption: you can see a much better post-AI equilibrium some years out, but getting from here to there could be bumpy.
For a long time, great advice for founders was "don't try to innovate on basic organizational practices." The roles you need, the executive jobs, the ratios, the spend in each area, and the operational methods are largely known for the major classes of company....