Sarvam AI Unveils Two New Language Models to Strengthen India’s Sovereign AI Push
New Delhi, Feb 2026: Bengaluru-based startup Sarvam AI on Wednesday announced the launch of two new large language models, marking a significant step in India’s efforts to develop sovereign artificial intelligence capabilities. The announcement was made at the India AI Impact Summit, where the company outlined how the models have been built from the ground up to suit India’s scale, diversity, and multilingual needs.
Sarvam AI said both models are trained using a mixture-of-experts (MoE) architecture, a design approach that activates only a subset of a model’s parameters for each token it processes. This improves performance while significantly reducing compute and inference costs. According to the company, efficiency and affordability remain central to its goal of deploying AI solutions at population scale across the country.
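For readers unfamiliar with the design, the sketch below illustrates the core idea behind sparse expert routing: a small gating network scores a pool of experts for each token, and only the top-scoring few are actually evaluated. This is a generic toy illustration, not Sarvam’s actual architecture; the expert counts and dimensions are arbitrary assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total expert count (hypothetical; drives total parameters)
TOP_K = 2         # experts actually evaluated per token (active parameters)
D_MODEL = 16      # toy hidden dimension

# Each "expert" is a small feed-forward weight matrix in this toy setup.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
# A gating network scores how relevant each expert is to a given token.
gate_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def moe_layer(token: np.ndarray) -> np.ndarray:
    scores = token @ gate_w                # one relevance score per expert
    top = np.argsort(scores)[-TOP_K:]      # indices of the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only the selected experts run; the remaining weight matrices are never
    # touched, so per-token compute scales with TOP_K, not NUM_EXPERTS.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
print(moe_layer(token).shape)  # (16,) -- same output, a fraction of the compute
```

Because only a few of the experts run per token, inference cost tracks the active parameter count rather than the total, which is the trade-off behind figures like 30 billion total versus roughly 1 billion active parameters.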
The first of the two models, Sarvam 30B, has a total of 30 billion parameters. However, for every output token generated, only about 1 billion parameters are activated. Co-founder Pratyush Kumar explained that this structure allows the model to deliver strong reasoning capabilities without the heavy compute cost typically associated with models of this size. He noted that Sarvam 30B performs competitively on reasoning benchmarks at both 8K and 16K context lengths when compared with other models in the same category.
Sarvam 30B supports a 32,000-token context window and has been trained on a massive corpus of around 16 trillion tokens. Kumar said this scale of training enables the model to handle complex instructions, long documents, and nuanced reasoning tasks, while remaining cost-efficient for real-world deployment in India.
Alongside this, Sarvam AI also unveiled a more advanced 105-billion-parameter model aimed at high-end reasoning and agent-based applications. The larger model activates around 9 billion parameters per output token and supports a much longer 128,000-token context window, allowing it to manage extended conversations, detailed workflows, and multi-step reasoning tasks more effectively.
Kumar compared the performance of the new 105B model with leading global systems, stating that on several benchmarks it outperforms DeepSeek R1, which was reported to have about 600 billion parameters at launch. He also said the model is more cost-effective than Google’s Gemini Flash while delivering better results on many evaluations, and that even against Gemini 2.5 Flash, Sarvam’s model shows stronger performance on Indian language tasks.
The launch aligns with the broader objectives of the government-backed IndiaAI Mission, which is supported by a Rs 10,000 crore fund to promote domestic AI development and reduce reliance on foreign platforms. So far, the mission has disbursed Rs 111 crore in GPU subsidies, with Sarvam AI emerging as the largest beneficiary. The startup secured 4,096 NVIDIA H100 SXM GPUs through Yotta Data Services and received nearly Rs 99 crore in subsidies.
Sarvam AI was founded in July 2023 by Pratyush Kumar and Vivek Raghavan, both former members of AI4Bharat, an initiative backed by Infosys co-founder Nandan Nilekani. The company was earlier selected as the first startup to build India’s foundational AI model under the IndiaAI Mission, positioning it at the forefront of the country’s AI ambitions.