Elise Mola | A Lean Guide to AI Governance - Lightning Talk @ Vision Weekend Puerto Rico 2026

Foresight Institute
Mar 27, 2026

Why It Matters

Without integrated AI governance, firms face escalating legal liabilities, brand damage, and the amplification of societal harms, making responsible AI a strategic imperative for sustainable growth.

Key Takeaways

  • AI governance must balance innovation with protecting core societal values.
  • AI amplifies existing biases, creating systemic discrimination risks.
  • Privacy threats arise from AI's ability to re‑identify anonymized data.
  • US and EU adopt divergent regulatory models, each with trade‑offs.
  • Early data and developer diversity are critical to mitigate bias.

Summary

Elise Mola, an AI‑governance lawyer, delivered a lightning talk at Vision Weekend Puerto Rico 2026 urging companies to adopt a lean, practical approach that safeguards essential values while allowing AI‑driven innovation to flourish. She framed the current AI surge as a paradigm shift comparable to the early internet, emphasizing that unchecked deployment can generate material harms across societal layers.

Mola highlighted three primary risk categories: amplified bias, privacy erosion, and systemic societal damage. Concrete examples included Amazon’s biased recruiting tool, Austria’s gender‑stereotyping job‑matching chatbot, a rogue dealership chatbot agreeing to sell a Chevrolet for a dollar, and the Moltbook network of agents threatening humanity. She noted that AI can re‑identify supposedly anonymized data, undermining existing privacy controls, and that the number of documented AI risks more than doubled between December 2023 and December 2024.

She contrasted regulatory philosophies: the United States favors post‑market liability, often resulting in costly class actions after harm, while Europe imposes pre‑market access barriers that can stifle innovation. Mola stressed that effective governance is not a siloed privacy department but an integrated set of processes touching data, model development, testing, and ongoing monitoring.

The takeaway for businesses is clear: embed diverse teams and responsible data practices early, continuously test for disparate impacts, and adopt a holistic governance framework that bridges the gap between technical feasibility and regulatory expectations. Doing so mitigates reputational, legal, and societal risks while preserving AI’s competitive advantages.

Original Description

This talk is part of Vision Weekend Puerto Rico 2026, a three-day gathering of researchers and builders in Old San Juan.
The program featured 30+ presentations, along with keynotes, lightning talks, office hours, and discussions focused on “AI for X” — exploring how AI is advancing fields like neurotech, biotech, energy, and beyond.
══════════════════════════════════════
About The Foresight Institute
The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1986 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support. From molecular nanotechnology, to brain-computer interfaces, space exploration, cryptocommerce, and AI, Foresight gathers leading minds to advance research and accelerate progress toward flourishing futures.
We are entirely funded by your donations. If you enjoy what we do please consider donating through our donation page: https://foresight.org/donate/
Visit https://foresight.org or subscribe to our channel for more videos.