What the Repeal of the US AI Diffusion Framework Means for India and the World

The AI diffusion framework was introduced by the Biden administration with the aim of maintaining US leadership in AI. The Bureau of Industry and Security (BIS) under the Trump administration repealed it shortly before it was to come into force on 15 May 2025.

Executive Summary

The AI diffusion framework introduced by the Biden administration has been repealed, as it was seen as unenforceable due to its compliance overheads and the negative impact it would have had on US diplomatic relations with tier 2 countries (which include India).

While the framework itself has been rescinded, the export controls on China and other arms-embargoed countries have been made more stringent. This has been done following diversion tactics adopted by Chinese companies to circumvent the restrictions.

The repeal of the AI diffusion framework is a positive development for India from a strategic autonomy perspective. While Indian companies would still have had pathways to secure all the chips they needed, the framework would have negatively impacted the future movement of capital and talent.

The Chip Security Act, recently introduced by American lawmakers, is another disruptive policy instrument that could increase the complexity and costs of advanced chips. Such measures also raise concerns about surveillance and negatively impact the strategic autonomy of even countries with which the US has good relations.

While chips are a potential chokepoint for controlling AI diffusion due to extremely concentrated supply chains, their long-term effectiveness is debatable. AI development is highly distributed, and it is difficult to restrict access to the technology. A multilateral consensus based on open standards and voluntary disclosure offers a viable path forward.

What Was The AI Diffusion Framework

Compute usage is indicative of AI capabilities. Since 2015, the amount of compute used to train large-scale models has been doubling roughly every 10 months, and such models use 10 to 100 times more compute than earlier deep learning models. Advanced chips are produced via an extremely concentrated supply chain, which creates a potential chokepoint for controlling AI diffusion. However, the long-term effectiveness of this chokepoint is debatable, as discussed further below.
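To make the scale concrete, here is a back-of-the-envelope sketch of what a roughly 10-month doubling period implies; the doubling figure is the one cited above, and the function itself is purely illustrative:

```python
def compute_growth(months: float, doubling_period_months: float = 10.0) -> float:
    """Multiplicative growth in training compute over `months`,
    assuming compute doubles every `doubling_period_months` months."""
    return 2 ** (months / doubling_period_months)

# Over five years (60 months), compute grows by a factor of 2^6 = 64.
print(compute_growth(60))  # → 64.0
```

At that pace, a frontier training run five years out would demand roughly 64 times today's compute, which is why control over the chip supply chain looks so attractive as a policy lever.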

The framework cleaved the world into three groups based on levels of trust that determine access and security requirements for importing advanced AI chips and certain AI model weights. The first tier included the US and 18 trusted allies like the UK and Japan, which had unrestricted access. The second included controlled-access countries like India, and the third included arms-embargoed countries like China and Russia, for whom access was restricted entirely.

Why The Diffusion Framework Was A Bad Idea

The order to rescind the framework clearly states two broad reasons for doing so: “These new requirements would have stifled American innovation and saddled companies with burdensome new regulatory requirements. The AI Diffusion Rule also would have undermined U.S. diplomatic relations with dozens of countries by downgrading them to second-tier status.”

Burdensome Regulatory Requirements

The “small yard, high fence” approach of the US has expanded so much in scale and scope that the yard no longer remains small, and the effectiveness of the fence is also questionable. Pranay Kotasthane has argued that the diffusion framework violated Tinbergen’s Rule (one policy instrument, one target). It tried to use export controls as a single instrument to achieve multiple objectives: maintaining US leadership in AI development, managing national security risks, allowing American businesses to continue selling their wares globally, and achieving foreign policy goals.

Additionally, both BIS and the industry would have found the filings, authorisations, licences, and waivers required to comply with the regulations burdensome. Nvidia, which would have been seriously impacted, had criticised the rule, claiming it was cloaked as an “anti-China” measure but would harm US competitiveness and innovation without enhancing security.

Undermines US Diplomatic Relations

The diffusion framework would also have frayed diplomatic relations with tier 2 (and possibly even tier 1) countries, which the export controls treated as followers rather than partners. Many countries would have been forced to consider moving closer to either the US or the Chinese orbit, reminiscent of Cold War-era blocs.

Questionable Long-Term Effectiveness

The framework also pushed countries to pursue alternatives that promote strategic autonomy. Open-source and open-weight models give countries autonomy over how they use and adapt AI. Open-weight models such as China’s DeepSeek might seem like a more viable long-term option for some countries in such an environment. Additionally, the growing emphasis on building “sovereign” computing infrastructure in India, the EU, and elsewhere is a sign of such de-risking efforts.

Effectiveness of Export Controls on Chips

Efforts To Circumvent The Export Controls

Chinese companies, faced with sudden restrictions on access to AI chips, have managed to acquire them by circumventing the export controls. For instance, Malaysia saw a surge in imports of computing systems and parts, including AI chips, during a period coinciding with the export restrictions, and Chinese companies are reported to be the primary clients of the data centres built there.

A U.S. select committee investigated the release of advanced AI models by the Chinese company DeepSeek and found that China is only about three months behind the U.S. in AI development. The committee accused DeepSeek of using restricted Nvidia chips for its model training. It also highlighted the possible use of model distillation, a technique that trains a smaller model to replicate the reasoning capabilities of a larger, more powerful model, thus reducing costs.
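The core of model distillation, as described above, can be sketched in a few lines. The functions below are a minimal, illustrative implementation of the standard soft-target loss (a temperature-softened KL divergence between teacher and student outputs); real pipelines use deep-learning frameworks and run this over large datasets:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher temperature produces a
    softer distribution, exposing more of the teacher's 'dark knowledge'."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions.
    Minimising this trains the student to mimic the teacher's full output
    distribution, not just its top answer."""
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student exactly matches the teacher and positive otherwise; this is also why distillation is hard to police, since it needs only query access to a capable model's outputs, not its weights or training hardware.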

With stricter enforcement of export control restrictions on chips and model weights, we can expect an increase in industrial espionage, cybersecurity breaches, and shady deals as the stakes for acquiring top-tier AI models continue to rise.

Implications Of DeepSeek

The release of DeepSeek R1, which was trained on a fraction of the compute utilised for training OpenAI’s frontier models, has made it clear that Chinese companies can continue making progress on building advanced AI models. This doesn’t mean that AI model development has escaped the necessity of scaling computing resources. However, working under constraints has led to several algorithmic and architectural breakthroughs, allowing more efficient compute utilisation.

In the past months, it has also become clear that massive step changes in intelligence or reasoning have not happened. The latest models from OpenAI or Anthropic, while impressive, are not a drastic step change from their previous versions. Increasingly, the focus seems to be shifting to agentic models that interact with the world and act on it. As a result, inference-time compute, which does not require chips as advanced as those used for training, is now more important. It is therefore reasonable to conclude that export controls on chips alone will not be an effective instrument for restricting progress in AI development.

Long-term Strategy To Reduce Technology Interdependence

The export controls push China towards a strategy of reduced technological dependence on the US. Chinese chips are one or two generations behind Nvidia chips: Huawei’s Ascend 910B reaches 280-400 TeraFLOPS, compared to 2,250 TeraFLOPS for Nvidia’s Blackwell chips. However, with a strong state push for developing domestic capabilities across the semiconductor value chain, China might be able to narrow the gap. China’s push for the adoption of open-source instruction set architectures (ISAs) such as RISC-V is another such de-risking initiative, as it reduces the dependence on licensing an ISA from Intel or Arm.
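Using the peak figures cited above, the headline gap works out to roughly 5.6x to 8x. This is a crude comparison, since peak TeraFLOPS ignores numeric precision, memory bandwidth, and interconnect, all of which matter for large training runs:

```python
# Crude performance-gap arithmetic using the peak figures cited above.
ascend_910b_low, ascend_910b_high = 280, 400  # TeraFLOPS, reported range
blackwell = 2250                              # TeraFLOPS, cited figure

gap_best_case = blackwell / ascend_910b_high  # best case for Huawei
gap_worst_case = blackwell / ascend_910b_low  # worst case for Huawei
print(round(gap_best_case, 1), round(gap_worst_case, 1))  # → 5.6 8.0
```

A one-generation-per-cycle hardware gap of this size can, in principle, be partly offset by deploying more chips in parallel, which is one reason peak per-chip FLOPS alone understates China's effective capacity.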

What Replaces the Framework

The export controls introduced by the Biden administration in October 2022 were subsequently tightened in the following years (1 and 2). US allies also joined in enforcing similar export controls, further limiting the transfer of chips and related technology to China. In March 2025, the Trump administration expanded the scope of the export controls and added several companies to the blacklist. While the AI diffusion framework has been rescinded, new guidelines have been introduced to strengthen the enforcement of export controls, and provisions under consideration would introduce on-chip features for monitoring and enforcing restrictions on the usage of advanced AI chips.

Policy Measures Introduced Recently

Guidance On The Prohibition Of The Use Of Chinese Chips

The BIS has issued a warning to persons and companies, both domestic and foreign, about using advanced semiconductor chips from arms-embargoed countries (including China, Macao, Russia, and others). Specific Huawei chips from the Ascend series are named. The guidance states that these chips were made in ways that violated US export control laws and cautions that using them without prior BIS authorisation could invite substantial criminal and administrative penalties.

Enforcing the restrictions on Huawei chips will be extremely challenging. The administrative resources needed to monitor and implement these controls are substantial. While the threat of penalties might deter most law-abiding companies, determined (and possibly malicious) actors will find ways to circumvent the export controls. Additionally, this approach encourages companies to look for loopholes in the export controls or to redesign their chips to bypass them.

Guidance On The Prohibition Of The Use Of U.S. Chips For Training Chinese AI Models

In addition to existing export controls, the BIS has issued guidance restricting exports, re-exports, or transfers of advanced AI chips to Infrastructure-as-a-Service providers (such as data centres) suspected of using them to train AI models for companies in arms-embargoed countries (including China, Macao, Russia, and others). Non-compliance could invite substantial criminal and administrative penalties.

Guidance To U.S. Companies On How To Protect Supply Chains Against Diversion Schemes

The BIS has issued a set of red flags and due diligence actions to protect against the PRC’s use of diversion schemes to bypass export controls on advanced chips. The transactional and behavioural red flags include customers who never purchased chips before 2022, sudden increases in procurement quantities, companies with little or no online presence, and customers linked to companies on the Entity List. Recommended due diligence practices include more exhaustive KYC processes and checks to ensure that chips will not be diverted or used for unauthorised purposes.

Policy Measures Under Consideration

A bipartisan group of U.S. lawmakers has introduced the Chip Security Act, which would require advanced AI chips to have built-in location-tracking and reporting capabilities. This is intended to stop the diversion schemes used to bypass export controls on chips. Chip manufacturers and exporters would be required to report to the U.S. government if a chip is diverted or tampered with, and other security measures might be mandated in the future.

This bill is problematic for many reasons. The technical feasibility of such measures is questionable, and implementing them would add complexity and cost to the chips while burdening industry. It also introduces privacy and surveillance concerns for consumers. Countries that are not adversaries of the US would be concerned about the impact on their strategic autonomy; like the AI diffusion framework, it undermines US diplomatic relations.

Hardware-enabled governance mechanisms, a similar idea in US policy discourse, are equally problematic. They involve special features built into chips that limit what a chip can do, restrict certain use cases, and enforce rules directly at the hardware level. While the promise of embedding export controls directly into the hardware might sound appealing to some, it has many problems. The controls limit all kinds of uses, not just risky ones, and motivated malicious actors may find ways to bypass them. In addition, the approach faces the same challenges as the Chip Security Act: it undermines the autonomy of users and increases the complexity and cost of chips.

Implications For India

India was unfavourably placed in tier 2 of the AI diffusion framework, which meant thresholds on the number of AI chips India could procure. However, the hard caps were far higher than current demand in the Indian compute ecosystem. In other words, India could have obtained all the chips it needed in the next few years even under the framework.

Additionally, the framework would have had second-order effects that cut against Indian interests. It would have undermined India’s goal of maintaining strategic autonomy in its technology ecosystem. Indian Global Capability Centres (GCCs) could have been denied access to advanced AI chips as country thresholds were reached, or forced to seek authorisations for purchases. This could have made tier 1 countries the favoured destinations for building cutting-edge technologies over tier 2 countries, draining talent and investment from India.

Given these realities, the government’s emphasis on building sovereign compute via GPU procurement warrants reconsideration. Indian companies are now free to procure all the chips they need, and the government could focus on other priorities. Union Minister of Electronics and IT Ashwini Vaishnaw’s recent comments about Indian efforts to develop indigenous GPUs by 2029 could be seen as a response to US attempts at maximising control of the technology.

Despite the repeal of the framework, it is clear that this is an era of export controls for AI chips. The geopolitical climate will undoubtedly continue to dictate the ebb and flow of AI chip access. In its aspiration for strategic autonomy, India has reason to develop a baseline capability in building its own advanced AI chip ecosystem. However, India is heavily reliant on foreign-made software and machinery, which makes the prospect of rapidly developing these capabilities alone highly unlikely. The extent to which the US and its allies will help India in this regard will depend not only on India-US relations but also on India’s own position in the AI value chain.

Conclusion

DeepSeek demonstrated that hardware alone will not determine the winner in any AI, AGI, or ASI competition. The true victor will be the one who successfully combines computing hardware with the right algorithmic and software innovations. This implies that any future government attempting to control the spread of AI or any emerging technology will need to find a way to manage the dissemination of ideas and code.

If we draw parallels between nuclear proliferation and AI diffusion, history teaches us that ideas and theories tend to spread. While we may be able to regulate the pace of this dissemination, the knowledge ultimately becomes accessible to everyone. Many AI experts predict that as AI development comes under greater government control, advanced AI models resembling AGI will operate in secrecy or under strict regulations. If this occurs, it would splinter the world into two tiers of access to AI: one with highly capable AI systems, and the other with open-source models functioning at significantly lower capability than the leading models.

As a foundational technology with high diffusion potential, AI has developed in the open so far. Unlike other critical technologies, such as nuclear weapons or the Internet, which were primarily developed by governments, significant advancements in AI have been driven by civilian applications. Attempts at controlling further diffusion through hardware and software controls are likely to fail. With development so highly distributed, the likelihood of industrial espionage, cybersecurity breaches, and shady deals grows as the stakes for acquiring top-tier AI models continue to rise. As countries find their way around any limiting framework and forge their own paths, it will become even more challenging to arrive at a global consensus on what the world should and should not do with AI. Instead of policy instruments that aim to achieve unilateral AI dominance, the world needs a multilateral consensus based on open standards and voluntary disclosure.
