Elise Mola | A Lean Guide to AI Governance - Lightning Talk @ Vision Weekend Puerto Rico 2026
Why It Matters
Without integrated AI governance, firms face escalating legal liabilities, brand damage, and the amplification of societal harms, making responsible AI a strategic imperative for sustainable growth.
Key Takeaways
- •AI governance must balance innovation with protecting core societal values.
- •AI amplifies existing biases, creating systemic discrimination risks.
- •Privacy threats arise from AI's ability to re‑identify anonymized data.
- •US and EU adopt divergent regulatory models, each with trade‑offs.
- •Early data and developer diversity are critical to mitigate bias.
Summary
Elise Mola, an AI‑governance lawyer, delivered a lightning talk at Vision Weekend Puerto Rico 2026 urging companies to adopt a lean, practical approach that safeguards essential values while allowing AI‑driven innovation to flourish. She framed the current AI surge as a paradigm shift comparable to the early internet, emphasizing that unchecked deployment can generate material harms across societal layers.
Mola highlighted three primary risk categories: amplified bias, privacy erosion, and systemic societal damage. Concrete examples included Amazon’s biased recruiting tool, Austria’s gender‑stereotyping chatbot, a rogue chatbot selling a chevron for a dollar, and the Moltbook network of agents threatening humanity. She noted that AI can re‑identify supposedly anonymized data, undermining existing privacy controls, and that the number of documented AI risks more than doubled between December 2023 and 2024.
She contrasted regulatory philosophies: the United States favors post‑market liability, often resulting in costly class actions after harm, while Europe imposes pre‑market access barriers that can stifle innovation. Mola stressed that effective governance is not a siloed privacy department but an integrated set of processes touching data, model development, testing, and ongoing monitoring.
The takeaway for businesses is clear: embed diverse teams and responsible data practices early, continuously test for disparate impacts, and adopt a holistic governance framework that bridges the gap between technical feasibility and regulatory expectations. Doing so mitigates reputational, legal, and societal risks while preserving AI’s competitive advantages.
Comments
Want to join the conversation?
Loading comments...