Sarvam AI Unveils Two New Foundation Models at India AI Summit 2026

Analytics Vidhya • February 20, 2026

Why It Matters

Open‑weight, high‑performance models from an Indian startup give enterprises a cost‑effective, locally controlled alternative to dominant foreign LLMs, accelerating AI adoption across the region.

Key Takeaways

  • Sarvam AI launches Sarvam-30B, a Mixture of Experts model activating 1B parameters per token.
  • Sarvam-30B was pretrained on 16 trillion tokens and supports a 32K context window.
  • Sarvam-105B activates 9B parameters per token with a 128K context window.
  • Both models will be released open-weight on Hugging Face soon, with API access to follow.
  • Targeted at real-time AI, enterprise reasoning, and tool use.

Summary

At the India AI Impact Summit 2026 in New Delhi, Indian startup Sarvam AI announced two new large-language foundation models, Sarvam-30B and Sarvam-105B, both built on a Mixture of Experts architecture and marking the country’s first home-grown releases of this scale.

Sarvam-30B runs with only one billion active parameters per token, was pretrained on 16 trillion tokens spanning code, web, multilingual, and math data, and supports a 32K-token context window, positioning it for low-latency conversational and high-throughput applications. The larger Sarvam-105B activates nine billion parameters per token, offers a 128K-token context window, and is engineered for complex reasoning, coding, scientific tasks, tool integration, and enterprise-scale deployments.
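
To make the “active parameters per token” idea concrete, here is a minimal, generic sketch of top-k Mixture of Experts routing in PyTorch. It is illustrative only: Sarvam has not published its architecture, and the layer sizes, expert count, and routing scheme below are assumptions chosen for readability, not Sarvam’s actual design.

```python
# Generic top-k MoE routing sketch (NOT Sarvam's actual architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # A learned router scores every expert for each token.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        scores = self.router(x)                          # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

# Only top_k of n_experts feed-forward blocks run per token, which is why a
# model's "active" parameter count sits far below its total parameter count.
layer = MoELayer(d_model=512, d_ff=2048, n_experts=8, top_k=2)
tokens = torch.randn(16, 512)
print(layer(tokens).shape)  # torch.Size([16, 512])
```

This routing pattern is what lets a large total-parameter model keep per-token compute, and therefore latency, close to that of a much smaller dense model.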

The presenter emphasized that both models will be released as open‑weight checkpoints on Hugging Face, with API access to follow shortly after, underscoring Sarvam’s commitment to open‑source collaboration. The announcement was framed as a call to action: “Are you just attending the AI wave or are you building with it?”
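
Once the checkpoints land on Hugging Face, loading them should follow the standard transformers workflow, sketched below. The repo id "sarvamai/sarvam-30b" is a hypothetical placeholder; the actual repository name was not announced.

```python
# Minimal sketch of loading an open-weight checkpoint via Hugging Face
# transformers. The repo id below is a HYPOTHETICAL placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sarvamai/sarvam-30b"  # hypothetical; actual id not yet published

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard across available GPUs (requires accelerate)
)

prompt = "Summarize the benefits of open-weight LLMs for Indian enterprises:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Open weights mean this kind of local, license-free inference is possible without going through a vendor API, which is the crux of the “locally controlled alternative” argument above.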

If the models deliver on their promises, they could accelerate India’s AI ecosystem, provide domestic alternatives to foreign LLMs, and enable local enterprises to embed advanced language capabilities without licensing constraints, reshaping competitive dynamics in the global AI market.

Original Description

At the India AI Impact Summit 2026, Sarvam AI unveiled two new models – Sarvam-30B and Sarvam-105B – built with a Mixture of Experts architecture for efficiency and power.
Sarvam-30B activates 1 billion parameters per token and supports a 32K context window, ideal for real-time AI and high-throughput tasks. It was pretrained on 16 trillion tokens across code, web, and math data.
Sarvam-105B activates 9 billion parameters per token with a 128K context window, designed for complex reasoning, coding, mathematics, and enterprise use.
Both models will be open weight on Hugging Face soon with API access to follow.
Sarvam AI is not just part of the AI conversation — it’s leading the way in building sovereign foundation models.