AI Still Hallucinates: Can We Trust It, and to What Extent? | Joshua Starmer X Data Science

Data Science Dojo • December 19, 2025

Why It Matters

Embedding confidence scores into AI outputs is essential for businesses to mitigate misinformation risk and make informed, data‑driven decisions.

Summary

The video centers on the persistent problem of AI hallucinations—instances where large language models generate plausible‑but‑incorrect information—and asks how much trust users can place in these systems. Joshua Starmer, speaking with Data Science Dojo, argues that while the technology will improve, the current lack of built‑in confidence indicators limits its reliability for critical tasks.

Starmer highlights two main points. First, the commercial incentive to curb hallucinations is strong: companies that want to monetize AI will need to address the stochastic nature of model outputs. Second, he proposes a practical solution—embedding confidence scores or uncertainty flags directly into responses so users can see which portions are high‑certainty and which require caution. He cites his recent talk at Carleton College where he outlined quantitative methods for measuring AI confidence, suggesting that such transparency would make the tools far more useful.
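The flagging idea Starmer describes could be sketched roughly as follows. This is a hypothetical illustration, not his implementation: it assumes per‑span confidence values are already available (for example, derived from an API's token log‑probabilities), and all names (`Span`, `flag_spans`) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    confidence: float  # 0.0-1.0, e.g. mean token probability for the span

def flag_spans(spans, threshold=0.8):
    """Label each span 'high' or 'caution' so a UI could highlight
    low-confidence portions (e.g. 'in red') for the reader."""
    return [(s.text, "high" if s.confidence >= threshold else "caution")
            for s in spans]

# A toy response: the second sentence is a low-confidence (and wrong) claim.
answer = [
    Span("The Eiffel Tower is in Paris.", 0.97),
    Span("It was completed in 1887.", 0.42),
]

for text, label in flag_spans(answer):
    suffix = "" if label == "high" else "  <-- use with caution"
    print(f"[{label}] {text}{suffix}")
```

The interesting design question is where the threshold comes from: a fixed cutoff is simple, but calibrating confidence scores against observed accuracy is exactly the kind of quantitative work Starmer alludes to in his Carleton College talk.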

A memorable quote from the discussion underscores the idea: “instead of just giving you a response, it says this first part is high confidence that this is correct, this other part we’re going to highlight in red and maybe use with caution.” He also warns that historians or other specialists could be misled if AI presents a biased or inaccurate narrative, illustrating the broader risk of uncritical reliance on generated content.

The implication for businesses is clear: without explicit uncertainty metrics, AI‑driven decision‑making remains vulnerable to costly errors. Embedding confidence indicators could accelerate adoption in sectors such as finance, legal, and research, while also prompting regulators and vendors to standardize trustworthiness benchmarks.

Original Description

🎙️ Future of Data and AI Podcast: Highlight with Joshua Starmer (CEO & Co-Founder, StatQuest)
Can we trust AI when it can hallucinate?
In this clip, Josh breaks down why hallucinations aren’t the real problem—and what actually matters when using large AI models. From uncertainty and stochastic behavior to the need for confidence signals, he explains how AI could become more trustworthy over time.
💡 Key insight: AI doesn’t need to be perfect—but it does need to communicate uncertainty so people can use it more responsibly.
🎧 Watch the full episode: https://youtu.be/Q1_3nMe00co
🔗 Explore more about the podcast: https://datasciencedojo.com/podcast/
#JoshuaStarmer #statquest #aihallucinations #artificialintelligence #machinelearning #aitrust #responsibleai #aieducation #futureofdataandai #aipodcast
Learn data science, AI, and machine learning through our hands-on training programs: https://www.youtube.com/@Datasciencedojo/courses
Check our community webinars in this playlist: https://www.youtube.com/playlist?list=PL8eNk_zTBST-EBv2LDSW9Wx_V4Gy5OPFT
Check our latest Future of Data and AI Conference: https://www.youtube.com/playlist?list=PL8eNk_zTBST9Wkc6-bczfbClBbSKnT2nI
Subscribe to our newsletter for data science content & infographics: https://datasciencedojo.com/newsletter/
Love podcasts? Check out our Future of Data and AI Podcast with industry-expert guests: https://www.youtube.com/playlist?list=PL8eNk_zTBST_jMlmiokwBVfS_BqbAt0z2