Open Source AI Becomes a Strategic Priority
It is quite interesting that open-source and open-weight AI have featured prominently in the AI Action Plans of both the US and China in the past week.
Here’s what’s in the US Action Plan:
Encourage Open-Source and Open-Weight AI

Open-source and open-weight AI models are made freely available by developers for anyone in the world to download and modify. Models distributed this way have unique value for innovation because startups can use them flexibly without being dependent on a closed model provider. They also benefit commercial and government adoption of AI because many businesses and governments have sensitive data that they cannot send to closed model vendors. And they are essential for academic research, which often relies on access to the weights and training data of a model to perform scientifically rigorous experiments. We need to ensure America has leading open models founded on American values. Open-source and open-weight models could become global standards in some areas of business and in academic research worldwide. For that reason, they also have geostrategic value. While the decision of whether and how to release an open or closed model is fundamentally up to the developer, the Federal government should create a supportive environment for open models.
Recommended Policy Actions

• Ensure access to large-scale computing power for startups and academics by improving the financial market for compute. Currently, a company seeking to use large-scale compute must often sign long-term contracts with hyperscalers—far beyond the budgetary reach of most academics and many startups. America has solved this problem before with other goods through financial markets, such as spot and forward markets for commodities. Through collaboration with industry, NIST at DOC, OSTP, and the National Science Foundation’s (NSF) National AI Research Resource (NAIRR) pilot, the Federal government can accelerate the maturation of a healthy financial market for compute.
• Partner with leading technology companies to increase the research community’s access to world-class private sector computing, models, data, and software resources as part of the NAIRR pilot.
• Build the foundations for a lean and sustainable NAIRR operations capability that can connect an increasing number of researchers and educators across the country to critical AI resources.
• Continue to foster the next generation of AI breakthroughs by publishing a new National AI Research and Development (R&D) Strategic Plan, led by OSTP, to guide Federal AI research investments.
• Led by DOC through the National Telecommunications and Information Administration (NTIA), convene stakeholders to help drive adoption of open-source and open-weight models by small and medium-sized businesses.
And here’s what’s in China’s Action Plan for Global Governance of AI, translated by Sinocism:
We will give full play to the roles of governments, industries, academia, and platform mechanisms of all countries to jointly promote international exchanges and dialogue on AI governance. We aim to build cross-border open-source communities and secure, reliable open-source platforms; promote open sharing of foundational resources; lower the barriers to technological innovation and application; avoid duplicate investments and resource waste; and improve the accessibility and inclusiveness of AI technologies and services. We will promote the establishment of open-source compliance systems, clarify and implement technical security guidelines for open-source communities, open up development resources such as technical and interface documentation, strengthen compatibility and interoperability of upstream and downstream products, and foster an open-source ecosystem enabling the free flow of non-sensitive technical resources.
Open-source AI also found a mention in the Indian PM’s speech at the Paris AI Action Summit in February 2025:
“We must develop open-source systems that enhance trust and transparency.”
This might be the one true consequence of DeepSeek: governments have realised that the genie is out of the bottle and there’s no point in trying to control foundational models, so they are jumping on the bandwagon instead.