DeepSeek Just Dropped Two Insanely Powerful AI Models that Rival GPT-5 and They're Totally Free

AI • SaaS

VentureBeat • December 1, 2025

Companies Mentioned

  • Google (GOOG)
  • OpenAI
  • Hugging Face
  • NVIDIA (NVDA)
  • Anthropic
  • Apple (AAPL)

Why It Matters

Free, frontier‑level models could erode premium API revenues and give enterprises a cost‑effective alternative, while geopolitical tensions may restrict deployment in regulated sectors.

Key Takeaways

  • DeepSeek releases two 685B-parameter V3.2 models under an open-source MIT license.
  • Sparse attention cuts inference cost roughly 70% for 128k-token context windows.
  • The Speciale variant wins gold at the IMO, IOI, and ICPC World Finals.
  • Benchmarks show parity with GPT‑5 and Gemini‑3.0‑Pro on math and coding.
  • Regulatory hurdles in the EU and US may limit enterprise adoption.

Pulse Analysis

The AI landscape is entering a new phase as DeepSeek’s open‑source release demonstrates that cutting‑edge performance no longer requires proprietary, pay‑per‑use models. By publishing two 685‑billion‑parameter systems under an MIT license, the Hangzhou‑based firm challenges the business model of U.S. giants such as OpenAI and Anthropic, offering developers the ability to run frontier models on‑premise or in private clouds without hefty API fees. This shift could accelerate innovation among startups and enterprises that have previously been priced out of the most advanced language‑model capabilities.

At the heart of DeepSeek’s breakthrough is Sparse Attention, a technique that trims the quadratic scaling of traditional transformers. The approach isolates the most relevant context fragments, slashing inference costs by roughly 70% for sequences up to 128,000 tokens—equivalent to a 300‑page book—while preserving accuracy on long‑document tasks. Coupled with a “thinking in tool‑use” capability that maintains reasoning across multiple external calls, the models excel at complex, multi‑step problems, as evidenced by gold‑medal performances at the IMO, IOI and ICPC World Finals and competitive scores on coding benchmarks like SWE-bench Verified.
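To make the idea concrete, here is a minimal sketch of one common sparsification strategy: each query attends only to its top-k highest-scoring keys instead of all of them. This is a generic illustration, not DeepSeek's actual architecture (whose exact selection mechanism is not described in the article), and the function name and parameters are our own.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Single-head attention where each query keeps only its top_k
    highest-scoring keys; all other attention weights become zero.
    Assumes 1 <= top_k <= number of keys.

    q: (n_q, d) queries, k: (n_k, d) keys, v: (n_k, d) values.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (n_q, n_k) scaled dot products
    # per-row threshold: the top_k-th largest score in each row
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    # mask everything below the threshold before the softmax
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over surviving keys
    return weights @ v
```

Note that this naive version still computes the full score matrix before discarding most of it; production sparse-attention systems avoid that by using a cheap selector (e.g. a low-rank or block-level "indexer") to pick candidate keys first, which is where the inference-cost savings on 128k-token contexts actually come from.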

Despite the technical triumph, adoption faces headwinds. European data‑protection authorities have flagged DeepSeek’s cross‑border data flows, and U.S. lawmakers are considering bans on government devices, reflecting broader security concerns around Chinese AI. These regulatory pressures could limit the models’ appeal for sensitive applications, yet the cost advantage and open‑source flexibility may still drive widespread uptake in less regulated sectors. As the AI race intensifies, DeepSeek’s strategy underscores a pivotal question: can open‑source, efficient models reshape market dynamics and dilute the dominance of established U.S. providers?
