
Workshop: Transformer Models with @SerranoAcademy | Future of Data and AI | Agentic AI Conference

Data Science Dojo • December 8, 2025

Why It Matters

Understanding the limitations of transformer models, and how to augment them with agents and retrieval-augmented generation, is essential for businesses seeking to deploy reliable, scalable AI solutions without costly hallucination errors.

Summary

The workshop, hosted by Luis Serrano at the Agentic AI Conference, provided a deep dive into transformer models, covering their architecture, practical strengths and weaknesses, and emerging techniques such as Retrieval-Augmented Generation (RAG) and autonomous agents. After a brief introduction and ground rules for interaction, Serrano walked participants through a visual, step-by-step explanation of how transformers process text one token at a time, illustrating why the models excel at tasks like summarization, code generation, and poetic composition while often stumbling on humor and factual accuracy.

Key insights emerged from a live audience poll: participants highlighted the models' fluency, ability to follow instructions, and impressive code-generation capabilities, contrasted with chronic hallucinations, overconfidence, and an inability to admit uncertainty. Serrano used these observations to explain the core trade-off between discriminative (predictive) AI, exemplified by multiple-choice-style tasks, and generative AI, which must create novel content token by token. He argued that the token-by-token generation process underlies both the impressive mimicry of style (e.g., Shakespeare-like poems) and the frequent production of nonsensical jokes, because the model lacks true grounding in factual truth.
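The token-by-token loop described above can be sketched in a few lines. The toy bigram table below stands in for a transformer's next-token distribution; the words and probabilities are invented for illustration, not taken from the workshop.

```python
# Toy next-token "model": a hand-written bigram table standing in for the
# probability distribution a transformer computes with attention layers.
# All entries below are invented for illustration.
BIGRAM_PROBS = {
    "machine": {"learning": 0.8, "vision": 0.2},
    "learning": {"is": 0.6, "models": 0.4},
    "is": {"fun": 0.7, "hard": 0.3},
}

def generate(prompt_token: str, max_new_tokens: int = 3) -> list[str]:
    """Generate text one token at a time, greedily picking the most
    probable next token -- the same autoregressive loop a transformer
    runs, just with a lookup table instead of a neural network."""
    tokens = [prompt_token]
    for _ in range(max_new_tokens):
        dist = BIGRAM_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        # Greedy decoding: argmax over the next-token distribution.
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate("machine"))  # each new token depends only on what came before
```

Because each step conditions only on the tokens generated so far, the model can produce fluent style without any notion of whether the continuation is factually true, which is exactly the hallucination trade-off discussed in the poll.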

Illustrative examples peppered the session: a participant asked the model for a joke and received a cringeworthy punchline, while the same model produced a surprisingly coherent poem about machine learning. Serrano leveraged these contrasts to introduce agents that can orchestrate multiple calls to a language model, decompose complex prompts, and retrieve external knowledge to mitigate hallucinations. He also demonstrated how RAG pipelines can augment transformers with up-to-date information, turning a pure generative model into a more reliable information-retrieval system.
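The retrieve-then-generate pattern behind RAG can be sketched minimally as follows; the corpus, the word-overlap scoring (a stand-in for real vector search), and the prompt template are all assumptions for illustration, not the pipeline demonstrated in the workshop.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
# A real RAG pipeline would embed documents with a vector model and
# pass the assembled prompt to an actual language model.
CORPUS = [
    "Transformers process text one token at a time.",
    "RAG grounds a language model in retrieved documents.",
    "Attention lets each token weigh every other token.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Stuff retrieved context into the prompt so generation is grounded."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does RAG ground a model?"))
```

The key design point is that the generative model never has to "know" the fact itself: the retrieval step supplies current evidence in the prompt, which is what turns a pure generator into a more reliable information-retrieval system.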

The workshop concluded with hands-on labs where attendees built simple agent workflows, reinforcing the notion that effective AI deployment now hinges on combining the raw generative power of transformers with retrieval, prompting strategies, and safety layers. For enterprises, the takeaway is clear: while large language models can accelerate content creation and coding tasks, they must be wrapped in robust orchestration frameworks to ensure factual correctness and controllable behavior.
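A stripped-down version of such an agent workflow might look like the sketch below. The naive planner, the lookup tool, and `fake_llm` are hypothetical stand-ins for a real agent framework, retrieval backend, and model API.

```python
# Sketch of an agent loop that orchestrates multiple model calls.
# fake_llm and lookup_tool are invented stand-ins for illustration.
def fake_llm(prompt: str) -> str:
    """Stand-in for a language-model call; returns a canned answer."""
    return f"[answer to: {prompt}]"

def lookup_tool(query: str) -> str:
    """Stand-in retrieval tool the agent calls to ground its answers."""
    return f"[retrieved facts about: {query}]"

def agent(task: str) -> str:
    """Decompose the task, retrieve evidence per subtask, then call the
    model once per subtask. Real agent frameworks loop until a stopping
    condition is met; a single pass keeps this sketch short."""
    subtasks = [part.strip() for part in task.split(" and ")]  # naive planner
    answers = []
    for sub in subtasks:
        evidence = lookup_tool(sub)  # retrieve before generating
        answers.append(fake_llm(f"{evidence} -> {sub}"))
    return " ".join(answers)

print(agent("summarize transformers and list their limits"))
```

Even this toy loop shows the orchestration idea: the agent breaks a complex prompt into smaller calls and injects retrieved evidence before each one, rather than trusting a single unguarded generation.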

Original Description

In this hands-on workshop, learn transformer models with Luis Serrano to gain a clear, visual understanding of transformer architectures through engaging explanations and coding exercises – no prior machine learning background required. During the session, attendees will:
- Explore the foundational components of transformer models, including embeddings and the attention mechanism.
- Understand how modern techniques like Retrieval-Augmented Generation (RAG) and AI agents build upon transformers.
- Participate in a hands-on codelab to solidify concepts through practical implementation.
#agenticaiconference #transformermodels #luisserano
------
👉 Learn more about Data Science Dojo:
https://datasciencedojo.com/
👉 Explore video tutorials:
https://datasciencedojo.com/tutorials/
👉 See community feedback and success stories:
https://datasciencedojo.com/data-scie...
At Data Science Dojo, we believe data science is for everyone. Our in-person and virtual training programs have helped 10,000+ professionals from 2,500+ companies — including Microsoft, Apple, and Meta — apply AI responsibly and effectively.
🔗 Subscribe to our newsletter for more AI tutorials and events:
https://datasciencedojo.com/newsletter/