AI

Should You Trust ChatGPT With Your Data? | Jerry Liu X Data Science Dojo

Data Science Dojo • December 1, 2025

Why It Matters

This distinction matters for legal exposure and competitive risk: mishandling proprietary or regulated data can lead to lawsuits, financial loss, or regulatory penalties, while everyday individual users face limited practical risk. Choosing the right model tier (public vs. enterprise) and adopting simple guardrails can materially reduce organizational liability.

Summary

The speakers argue that for most individual users, uploading personal or mundane documents to ChatGPT (or similar tools) poses minimal risk, since OpenAI does not broadly use such data traces for model training. However, companies and users handling highly sensitive, classified, or legally consequential information should avoid putting that data into public models and should instead use enterprise offerings, which provide stronger privacy guarantees and guardrails. The panel notes that casual use is already widespread — students and individuals routinely upload assignments and arbitrary files — so caution is needed mainly around proprietary or regulated content. The core recommendation is pragmatic: public models are generally fine for ordinary data, but not for sensitive corporate materials.

Original Description

🎙️ Future of Data and AI Podcast: Highlight with Jerry Liu (CEO & Co-Founder, LlamaIndex)
Should you trust ChatGPT with your data? Jerry Liu breaks it down.
In this highlight, Jerry explains how modern AI systems handle user data, what actually gets stored, and why understanding data flows is crucial before pasting sensitive information into any AI tool. He clarifies common misconceptions, privacy boundaries, and what organizations should keep in mind when using LLMs for real-world work.
💡 Key takeaway: AI tools aren’t inherently risky — but you need to know how they treat your data before you trust them.
Watch this clip to understand the real story behind data privacy in ChatGPT and other LLMs.
🔗 Watch the full episode: [Insert Link]
🎧 Explore more episodes: https://www.youtube.com/playlist?list=PL8eNk_zTBST_jMlmiokwBVfS_BqbAt0z2
