Do the AI Diffusion Rules mean India cannot get more than 50,000 advanced GPUs?

Earlier this month, the outgoing Biden-Harris Administration introduced the 'Framework for Artificial Intelligence Diffusion', aimed at controlling the global spread of advanced AI capabilities. The framework regulates the global distribution of advanced AI chips, which are necessary to develop and run AI models. From an Indian perspective, understanding the mechanisms of the framework that govern India’s access to AI chips is critical for policymakers.

India can procure AI chips under controlled conditions through four pathways under the framework:

1. National Validated End User (VEU) Authorisation Mechanism;

2. One-Time Licences Mechanism (Based on Country-Specific Limits); 

3. Licence Exemption Mechanism (For Small-Scale Deployments);

4. General VEU Authorisation Mechanism.  

Note: The General VEU Authorisation Mechanism is not included in the flowchart. It is dealt with in the section below.

1. National VEU Authorisation Mechanism

For Indian entities, the National VEU (NVEU) mechanism is the cornerstone of AI chip procurement. Indian companies can apply for NVEU Authorisation to import and deploy a capped number of AI chips in data centres. Using NVIDIA’s popular H100 AI chip as a stand-in for AI compute prowess, the framework allows NVEUs to deploy up to 100,000 H100-equivalent chips by 2025, up to 270,000 by 2026 and up to 320,000 chips cumulatively by 2027. However, the framework does not merely limit the physical number of chips. The number of chips an NVEU can deploy is determined by calculating their Total Processing Power (TPP): the caps are quotas of aggregate computing power, expressed in H100-equivalents. For instance, in 2025 an Indian NVEU-authorised entity can deploy up to 100,000 H100s, but if it were to field NVIDIA’s more powerful B200s (roughly 2.3x the TPP of the H100), it would be limited to an estimated 43,500 chips instead. This added complexity will force policymakers and industry to balance chip quality against quantity: prioritising high-performance chips like the B200 could exhaust India’s TPP allocation faster, constraining future flexibility.
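As a rough illustration of how a TPP-based cap translates into different physical chip counts, here is a minimal sketch in Python. The TPP figures used (about 16,000 per H100 and about 37,000 per B200) are assumptions for illustration, not values taken from the framework itself.

```python
# Minimal sketch: translating a cap expressed in H100-equivalents into a count
# of a different chip via Total Processing Power (TPP).
# The TPP values below are illustrative assumptions, not official figures.

H100_TPP = 16_000   # assumed TPP of one H100
B200_TPP = 37_000   # assumed TPP of one B200 (~2.3x the H100)

def allowed_chips(cap_h100_equivalents: int, chip_tpp: float) -> int:
    """Convert an H100-equivalent cap into the number of chips of another type."""
    tpp_budget = cap_h100_equivalents * H100_TPP
    return int(tpp_budget // chip_tpp)

# 2025 NVEU cap of 100,000 H100-equivalents, spent entirely on B200s:
print(allowed_chips(100_000, B200_TPP))  # ~43,000 B200s instead of 100,000 H100s
```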

To grasp the scale of these limits, consider global benchmarks. The world’s largest publicly known AI cluster, xAI’s Colossus supercomputer facility, operates 100,000 H100s. The 2027 NVEU cap for an Indian entity (320,000 H100-equivalents) would theoretically permit an Indian cluster roughly three times the size of Colossus, a significant capacity. Meta’s AI training infrastructure, built to train models like Llama 3, comprises some 350,000 H100s (roughly 346.5 million TFLOP/s). Yet India’s current AI infrastructure remains modest: the IndiaAI mission aims to deploy 10,000–18,000 GPUs nationwide, far below the framework’s thresholds.

The framework does not simply restrict GPUs; it can also restrict any other type of AI chip that is sufficiently powerful. Converting between chip types to remain within the framework's TPP thresholds demands real-time tracking and coordination within industry and by the government, a task requiring a degree of regulatory capacity India currently lacks. As chip efficiency improves, India must navigate trade-offs between deploying cutting-edge hardware and preserving quota longevity.
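To make that tracking burden concrete, the sketch below imagines a running quota ledger that converts every deployment, whatever the chip, into H100-equivalents and checks it against the cumulative NVEU caps cited above. The ledger and the TPP values are hypothetical; only the yearly caps come from the framework as described in this article.

```python
# Hypothetical quota ledger: each deployment is converted into H100-equivalents
# via TPP and checked against the cumulative NVEU cap for the relevant year.
# Caps follow the figures cited above; TPP values are illustrative assumptions.

NVEU_CAPS = {2025: 100_000, 2026: 270_000, 2027: 320_000}  # cumulative H100-equivalents
H100_TPP = 16_000  # assumed TPP of one H100

class QuotaLedger:
    """Tracks cumulative deployments in H100-equivalents against yearly caps."""

    def __init__(self) -> None:
        self.deployed_h100_equiv = 0.0

    def deploy(self, year: int, chip_tpp: float, count: int) -> bool:
        """Record a deployment if it fits within the cumulative cap for `year`."""
        added = count * chip_tpp / H100_TPP
        if self.deployed_h100_equiv + added > NVEU_CAPS[year]:
            return False  # would breach the cumulative installed-base cap
        self.deployed_h100_equiv += added
        return True

ledger = QuotaLedger()
print(ledger.deploy(2025, chip_tpp=16_000, count=60_000))  # True: 60,000 H100-equivalents
print(ledger.deploy(2025, chip_tpp=37_000, count=15_000))  # True: ~34,700 more, total ~94,700
print(ledger.deploy(2025, chip_tpp=37_000, count=5_000))   # False: would exceed the 100,000 cap
```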

The framework encourages government-to-government agreements for NVEU Authorisations. The U.S. government will likely seek assurances on cybersecurity and foreign-policy alignment. Approval of NVEU Authorisations will likely hinge on these government-to-government arrangements, given the substantial amount of computing power 320,000 H100s represent for a single company. For perspective, China’s DeepSeek operated a cluster of 10,000 older NVIDIA A100 chips and is reported to have used ~50,000 H100-equivalents in training its cutting-edge R1 AI model. If the American concern is to ensure that frontier AI development worldwide remains aligned with its strategic interests, it is implausible that any company applying for NVEU authorisation would be approved without governmental assurances between the U.S. and India. Continued bilateral engagement between the U.S. and India will therefore be necessary for predictable NVEU authorisation.

NVEU compliance also requires adherence to stringent safeguards: cybersecurity standards (FedRAMP High), physical security protocols aligned with U.S. Department of Defense norms, and supply chain independence from countries like China. Beyond NVEU, the framework offers three supplementary routes for Indian entities:

2. One-Time Export Licences Mechanism (Based on Country-Specific Limits) 

Independent of NVEU quotas, this mechanism permits shipments of 50,000 H100-equivalents until 2027, expandable to 100,000 with a U.S.-India government agreement. AI chips shipped before 2025 do not count towards this limit. Licences are granted on a first-come, first-served basis, making this route suited to urgent projects or supplemental capacity. However, the 100,000 cap is cumulative across 2025 to 2027 and does not refresh annually. The framework has not yet identified limits for the years beyond 2027; it will likely do so based on how the AI industry and U.S. security goals evolve. This may hinder long-term planning of chip acquisitions by Indian entities.

3. Licence Exemption Mechanism (For Small-Scale Deployments)

Small-scale imports of up to 1,700 H100-equivalents per year by an Indian company are exempt from licensing when intended for non-frontier AI development, such as scientific research or drug discovery. These exemptions avoid bureaucratic hurdles but are insufficient for training advanced models, ensuring scrutiny remains focused on high-risk deployments.

4. General VEU Authorisation Mechanism

The General VEU (GVEU) authorisation mechanism is a legacy pathway dating to 2007. GVEU allows exports of controlled dual-use items to pre-vetted entities for civilian end uses. While theoretically available to India, its relevance is unclear after India’s accession to the Wassenaar Arrangement. The framework does not explicitly address GVEU, leaving ambiguity around its interaction with NVEU caps.

In a nutshell, the framework’s architecture generously rewards alignment with U.S. strategic interests. Closer Indo-U.S. cooperation could see India’s one-time export licence caps relaxed, while lax compliance (such as allowing re-exports of chips through Indian entities to Russia) might trigger more restrictions. This creates dependencies: Indian entities that build data centres as NVEUs will also entrench themselves in the U.S.-led ecosystem, making a transition to alternative AI hardware and software ecosystems, like that of China’s Huawei, costly and impractical.

Challenges remain. Implementing FedRAMP High cybersecurity standards, routine for U.S. cloud giants, may strain smaller Indian data centre operators. Similarly, disentangling from Chinese supply chains (an NVEU requirement) can complicate partnerships in other Indian sectors where dependencies on China persist.

A rough path forward, one that advances India’s acquisition of much-needed compute while aligning with U.S. security norms, involves a few steps. In the short term, New Delhi should prioritise NVEU authorisations for strategic AI projects, since this is the main pathway through which India can demonstrate itself as a reliable partner to the U.S. Negotiating government-to-government assurances through the iCET and the Quad to expand compute access is also a critical imperative; this will mean helping the fledgling Indian AI industry invest in the compliance infrastructure needed to meet U.S. standards. In the long term, India must lay the foundations for seriously exploring more open AI hardware alternatives, such as those built around the open RISC-V architecture.

This article also appears here as a newsletter post on Technopolitik.

