“Sovereignty” was one of the most commonly used terms during the AI summit. It is a political term that has been transposed to technological contexts to mean a range of nebulous things. What we need with AI or other critical technologies is not to do everything ourselves, but to have a resilient technology ecosystem and to make decisions about the technology we use based on India’s national interest. A maximalist version of sovereignty, where we build the entire AI stack indigenously, might not be within reach for India. However, an alternative path exists that focuses on open-source, use cases, and diffusion.
The Sovereignty We Want
Large generative AI models require massive resources to build and run. To put things in context, India’s entire IndiaAI Mission has a budget of around $1.2 billion over five years. Alphabet alone spent $91 billion on capital expenditures in 2025, with projections for next year of almost double that. The past few years have also seen frontier AI labs burn through substantial capital, with most having no viable path to profitability; this burn has been funded by venture capital or by other profitable business lines in a bid to capture market share. In addition, vertical integration across different stages of the AI supply chain, such as data, compute, models, and distribution channels, makes it harder for a new entrant to compete at the frontier.
Many leaders of frontier AI labs claim that powerful AI systems will far surpass human capabilities in the near future. One of them even suggests that this would be the equivalent of a “country of geniuses in a datacenter”. Many experts are sceptical of such short timelines for superintelligent AI. However, even if the most dramatic predictions don’t come true, the productivity gains are real, and the advantages of early movers compound. Consider how Claude Code has transformed software development, enabling practically anyone to build working applications by describing what they want in plain English.
With orders-of-magnitude differences in investment and other structural challenges, market concentration in frontier generative AI models is likely, with a few companies dominating. A good outcome would be some competition in this space among a few companies from at least a few different countries, even if none of them is Indian. This might sound like giving up. It’s not. It is about recognising where the real opportunity lies.
Open Source As A Pathway
While brute-force scaling of models and compute has been one pathway to AI progress, an alternative one pivots towards efficiency, reasoning, and adaptability, where how you use compute matters more than how much of it you have. Many open-source efforts operating under resource constraints are leading the way here. Because these efforts lack the financial resources of large tech companies, their models will typically lag behind the frontier, but that is fine for most use cases. Open-source AI systems also offer advantages such as control over your own data and the option to run them where you want.
Many strong open-source models from academia and research organisations haven’t gained traction with developers because they are not very accessible or discoverable. The IndiaAI Mission has funded several AI models, but building a model is only half the job. If developers can’t easily find and deploy these models, there won’t be much uptake, so discovery and distribution need to be made as seamless as possible. Open-source projects have already shown that powerful AI models can run on ordinary consumer hardware and that developers can switch between AI providers without being locked in. Governments can support such efforts to keep the ecosystem competitive.
Further, “public money, public code” could be a guiding philosophy when funding AI models or procuring systems for government use. This enables the resulting models and tools to be openly available for any Indian developer or business to build on.
On Being The Use-Case Capital Of The World
The IndiaAI Mission’s priority areas are agriculture, healthcare, and education. These sectors represent massive populations with real needs but low ability to pay, which means frontier AI companies optimising for revenue won’t build for them. Many of the initiatives funded under the Mission have prioritised developing indigenous multilingual or multimodal generative AI models. Many also focus on small language models that can run under tight resource constraints, such as on mobile and other edge devices.
For instance, think of an application that helps a farmer decide which crops to plant based on market factors or weather conditions, or one that can identify pests from crop images and suggest remedies. Running large frontier language models is impractical for such applications at scale. The intelligence behind a pest-identification app comes from specialised machine learning models trained on local data: local soil types, local weather patterns, local crop diseases. A small language model acts as the translator, letting the farmer talk to this system in her own language, by voice if needed. This might not be cutting-edge AI research, but it solves real-world problems.
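The division of labour described above can be sketched in outline. Everything in this sketch is hypothetical: the `classify_pest` function stands in for a vision model trained on local crop images, the `REMEDIES` table stands in for an expert-curated knowledge base, and the language tag stands in for the small language model that would translate and voice the advice.

```python
def classify_pest(image_features: dict) -> str:
    # Stand-in for a specialised vision model trained on local crop images.
    # A real system would take raw pixels; here we use toy features.
    if image_features.get("leaf_spots") and image_features.get("webbing"):
        return "spider_mite"
    return "unknown"

# Stand-in for an expert-curated, locally validated remedy knowledge base.
REMEDIES = {
    "spider_mite": "Spray neem oil in the early morning; avoid midday heat.",
    "unknown": "Could not identify the pest; please retake the photo.",
}

def respond(image_features: dict, language: str) -> str:
    # The specialised model supplies the intelligence...
    pest = classify_pest(image_features)
    advice = REMEDIES[pest]
    # ...and a small language model would translate the advice into the
    # farmer's own language (by voice if needed). Here we only tag the
    # target language to mark where that step fits.
    return f"[{language}] {advice}"

print(respond({"leaf_spots": True, "webbing": True}, "hi-IN"))
```

The point of the structure is that the expensive, domain-specific intelligence lives in small specialised components, while the language model is only the conversational front end.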
The bottleneck to addressing such use cases is data. In many cases, the datasets don’t yet exist, and building them requires collaboration between researchers, domain experts, and the communities they serve. In addition, the government could use procurement to incentivise such applications and bring many of them to life.
The path to sovereignty for a middle power like India lies not in building the next frontier model but in making AI innovation accessible across sectors such as agriculture, healthcare, and education. Depending on the task, that could mean using a frontier model from an American lab, an open-source model that meets specific needs, or an indigenous model that outperforms in certain contexts.