Andrew Trask | It’s Time to Harvest the Secure AI Tech Tree

Foresight Institute
Apr 14, 2026

Why It Matters

Secure AI foundations enable companies to protect user data, meet emerging regulations, and deploy trustworthy AI services at scale, directly impacting market competitiveness and risk management.

Key Takeaways

  • Secure AI sits at the intersection of cryptography, deep learning, and distributed systems.
  • Attribution-based control is identified as the core problem underlying many secure AI challenges.
  • Federated learning needs combined input and output privacy to prevent gradient leakage.
  • Trust-over-IP and hierarchical aggregation can mitigate copy and branching attacks.
  • Consolidating problem clusters accelerates progress on AI safety, alignment, and privacy.
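The gradient-leakage point above can be made concrete with a short sketch. The standard mitigation combining input and output privacy is DP-SGD-style clipping plus Gaussian noise; the function names and parameters below are illustrative, not taken from the talk, and this is a toy version assuming plain Python lists rather than a real training framework.

```python
import random

def clip(vec, max_norm):
    """Scale a gradient vector so its L2 norm is at most max_norm."""
    norm = sum(x * x for x in vec) ** 0.5
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in vec]

def dp_gradient(per_example_grads, max_norm=1.0, noise_std=0.5):
    """DP-SGD-style update: clip each example's gradient, sum, add noise.

    Clipping bounds any one example's influence on the shared update
    (input privacy); Gaussian noise masks the residual per-example
    signal in what leaves the device (output privacy).
    """
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        for i, x in enumerate(clip(g, max_norm)):
            total[i] += x
    return [t + random.gauss(0.0, noise_std * max_norm) for t in total]

# A raw federated update would expose per-example gradients verbatim;
# the clipped, noised version no longer directly encodes any one example.
grads = [[3.0, -4.0], [0.3, 0.1]]  # per-example gradients (toy values)
noisy_update = dp_gradient(grads, max_norm=1.0, noise_std=0.5)
```

Without the noise step, clipping alone still leaks: an observer who sees the exact clipped sum can often invert it for small batches, which is why the talk stresses that input and output privacy are needed together.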

Summary

Andrew Trask opened the session by presenting the "secure AI tech tree," a visual framework that maps the sprawling landscape where cryptography, deep learning, and distributed systems converge. The tree groups roughly five major subdivisions—including privacy-preserving collaboration, attribution control, and trust mechanisms—each populated with specific research problems and emerging solutions. He argued that many of these disparate issues trace back to a single, under-appreciated root: insufficient attribution-based control, which he broke into three sub-problems—addition, copy, and branching.

In practice, this manifests in federated learning deployments, where gradient updates can leak private data unless both input and output privacy (e.g., differential privacy) are applied. Trask highlighted the need for hierarchical trust-over-IP models, suggesting that aggregating updates within trusted sub-networks before broader dissemination can curb poisoning and copy attacks. Concrete examples punctuated his talk: Google's 2017–18 federated learning rollout for on-device language models, the vulnerability of gradient leakage without differential privacy, and the paradox that a model trained on a fraction of the world's data can still expose individuals when output-privacy gaps remain. He also referenced collaborative work with William Isaac, Eric Drexler, and others on privacy-enhanced technologies that map specific neural activations to data sources, illustrating how attribution can be enforced.

The broader implication is that by collapsing a sprawling list of problems into a tighter set centered on attribution control, researchers and firms can prioritize tooling, standards, and cross-disciplinary partnerships. This streamlined focus promises faster advances toward AI safety, regulatory compliance, and commercial trust, turning abstract security concerns into actionable product roadmaps.
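The hierarchical trust-over-IP idea in the summary can be illustrated with a toy two-tier aggregator. The structure and names here are illustrative assumptions, not the talk's actual protocol: clients send updates only to a trusted sub-network aggregator, which averages them locally, so no individual client update ever crosses the trust boundary to the global server.

```python
def average(vectors):
    """Element-wise mean of equal-length update vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def hierarchical_aggregate(sub_networks):
    """Two-tier aggregation: average inside each trusted sub-network
    first, then average the sub-network means globally.

    Only the per-sub-network means leave each trust boundary, which
    limits what a global observer sees and localizes the blast radius
    of a poisoned or copied update to one sub-network.
    """
    local_means = [average(clients) for clients in sub_networks]
    return average(local_means)

# Two sub-networks of two clients each; only the local means
# [2.0, 3.0] and [6.0, 7.0] ever reach the global aggregator.
subnets = [
    [[1.0, 2.0], [3.0, 4.0]],  # trusted sub-network A
    [[5.0, 6.0], [7.0, 8.0]],  # trusted sub-network B
]
global_update = hierarchical_aggregate(subnets)  # -> [4.0, 5.0]
```

A real deployment would add authentication between tiers and anomaly checks on each sub-network's mean; the point of the sketch is only the topology: aggregation happens inside a trust boundary before anything is disseminated more broadly.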

Original Description

Subscribe to Foresight Secure AI Seminar Group: https://foresight.org/seminar/secure-ai-seminar-group/
Supporting researchers, engineers, and entrepreneurs in computer science, ML, crypto, security, and related fields who leverage those technologies to improve cooperation across humans and ultimately AIs.
Andrew Trask | It’s Time to Harvest the Secure AI Tech Tree
Abstract: Foresight's "Secure AI Tech Tree" represents one of the clearest and most complete taxonomies of Secure AI ingredients available, complete with a catalog of problems with each ingredient's solo use and directions for solutions. Scanning the tree, one observes that the taxonomy of solutions is almost universally formed through combinations with other branches. Yet the tree leaves these "solution combinations" unresolved—a tapestry of dis-integrated observations about ingredient-pairings that can work. So what is the final product when they're actually combined? What is Secure AI?
In this talk, Andrew Trask will attempt to harvest the Secure AI Tech Tree and describe its vision in the form of an integrated theory of Secure AI. He will survey the combination of deep learning, cryptography, and distributed systems technologies listed on the tree, describing a fully combined integration which theoretically addresses many of the key problems in cooperative, privacy-preserving, secure, robust, transparent, verifiable, and aligned AI—the high-level subjects of the tree. In doing so, this talk will reveal the Tech Tree's solutions to many low-level problems while also uncovering a higher-order problem: what are the incentives that would trigger such an integrated solution, what are the problems preventing those incentives, and how can they be overcome? Trask believes this perspective of the tree can reveal the final major hurdle to broad Secure AI adoption in the world. Attendees should come ready for a rich, spicy discussion about the Secure AI Tech Tree and the next steps for the Secure AI community.
Speaker Bio: Andrew Trask is the Founder of OpenMined, a PhD Candidate at the University of Oxford, and a Senior Research Scientist at DeepMind. For the past ~decade in these roles, Andrew has been seeking to understand the ingredients involved in Secure AI and construct an integrated system and theory of change for their adoption, the subject of which is his (nearly complete) PhD Thesis, OpenMined’s catalog of prototypes, pilots, and papers, and which is informed by his time on DeepMind’s language modelling research team (2017-2022) and ethics team (2022 onward).
══════════════════════════════════════
About The Foresight Institute
The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1986 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support. From molecular nanotechnology, to brain-computer interfaces, space exploration, cryptocommerce, and AI, Foresight gathers leading minds to advance research and accelerate progress toward flourishing futures.
We are entirely funded by your donations. If you enjoy what we do please consider donating through our donation page: https://foresight.org/donate/
Visit https://foresight.org, subscribe to our channel for more videos or join us here:
══════════════════════════════════════
Timecodes
00:00 Secure AI Framing
02:30 Tech Tree Overview
04:20 Attribution Based Control
07:58 Federated Learning Limits
13:40 Privacy Copyright Alignment
16:20 Three Core Problems
20:05 Privacy Budgets Markets
31:40 Licklider And Agents
35:45 Persistent Memory Security
38:15 Interpretability Versus Attribution
42:25 Adoption And Standards
