State of AI Governance, 2024

A report analysing AI governance measures across countries, companies, and multistakeholder gatherings

Published April 16th, 2025

About the State of AI Governance Report

The rapid progress in artificial intelligence capabilities has far-reaching impacts. It has the potential to significantly boost economic productivity, disrupt labour markets, and alter the balance of power between countries. The widespread diffusion of this general-purpose technology presents significant governance challenges that are amplified by the ongoing geopolitical competition for AI dominance.

It is against this backdrop that the Takshashila Institution, an independent centre for research and education in public policy, presents its inaugural State of AI Governance Report. This report provides a systematic comparative analysis of AI governance approaches across different countries, revealing their strategic priorities. It also analyses the effectiveness of corporate self-regulation initiatives and the progress of multistakeholder collaborative efforts. It concludes by offering predictions for these areas in the coming year.

This annual report will track key developments, analyse trends and offer informed predictions for the AI governance environment. It is intended to provide policymakers, analysts, and interested citizens with insights that help navigate the evolving AI governance landscape.

Executive Summary

This report analyses AI governance in three contexts – countries, companies and multistakeholder gatherings.

Countries

  • Being at the forefront of AI innovation, the US favours a pro-market regulatory environment while prioritising geopolitical considerations to maintain its competitive advantage.

  • The EU has focused on creating a comprehensive regulatory framework that prioritises transparency, accountability, and the protection of individual rights.

  • China's approach prioritises national security and favours heavy state control in enforcing regulations and driving innovation.

  • India has opted for a light-touch regulatory environment while investing in developing indigenous AI models for Indian use cases.

Companies

  • Many companies are proactively establishing principles, guardrails and transparency and disclosure norms that guide how they build or use AI. These initiatives sometimes go beyond what is strictly expected by the regulatory environment in which they operate and are intended to build trust with users.

  • However, reporting on AI governance efforts by companies is not standardised, and the details of specific initiatives vary considerably. It is also unclear to what extent these efforts involve meaningful external scrutiny. The report analyses the governance initiatives of a few companies operating at different parts of the AI value chain.

Multistakeholder Gatherings

  • Various multistakeholder gatherings, such as the AI Summits and the Global Partnership on AI, have been formed to raise awareness and coordinate AI governance efforts among different countries.

  • Most of these groupings lack binding commitments or backing from all members (for instance, the US and UK refused to sign the declaration on inclusive and sustainable AI at the AI Action Summit in February 2025). However, they serve as platforms to highlight important concerns and drive convergence in AI governance efforts.

Predictions

The report concludes with the following predictions on what we can expect in AI governance in the coming year.

  • Global: Compute thresholds for enforcing regulations will no longer be relevant. The US and EU use training compute thresholds of 10^26 and 10^25 FLOPs, respectively, to trigger certain regulatory obligations. The effectiveness of these thresholds as a measure of capability will be challenged as inference compute begins to scale and smaller models become more efficient.

  • Global: Investments in sovereign cloud infrastructure will increase, driven by geopolitical considerations.

  • Global: AI governance regulations at the state level will continue to prioritise innovation over transparency, accountability, and societal well-being. In other words, geopolitical considerations will trump the protection of individual rights as a governance priority.

  • India: The compute capacity created under the IndiaAI mission, aimed at incentivising startups and building indigenous models, will be underutilised. This is due to a lack of demand that meets the subsidy qualification criteria, as well as friction in the bureaucratic processes involved.

  • India: The regulatory focus will be on protecting the information ecosystem from content deemed harmful to the government or Indian society.

  • US: US chip restrictions on China will not escalate further. This is because DeepSeek makes owning newer chips less relevant.

  • US: Federal laws focused on monitoring AI safety, and federal agency assessment of AI for discrimination and bias, will be made defunct or significantly watered down. By the end of the year, AI safety guardrails will be driven by private firms.

  • China, EU: Open-source and open-weight models will continue to be pushed by China and the EU as a pathway to strategic autonomy and technology leadership. DeepSeek and Mistral will remain open-weight or open-source.

  • EU: The EU’s comprehensive regulatory framework, including penalties for non-compliance, will result in some companies not releasing their AI models or features in the EU. This might lead to milder enforcement of the regulations. As per the declared timeline, rules on notified bodies, general-purpose AI models, governance, confidentiality, and penalties start to apply from August 2025.

  • China: The US Diffusion Framework will not stop state-of-the-art AI models coming out of China, at least not in the next year. This is because DeepSeek makes owning newer chips less relevant, and China has built up an overcapacity of data centres over the past few years.

  • Corporate: The governance of AI within companies will become a bigger requirement as governments firm up their positions on AI. ‘Chief Responsible AI Officer’ will emerge as a new role at companies seeking to deploy AI solutions at scale, responsible for ensuring that AI is deployed in a manner that, at the very least, protects the company from litigation.
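The compute thresholds cited in the predictions above can be made concrete with a small sketch. It uses the common approximation that dense-model training compute is roughly 6 × parameters × training tokens, and compares a hypothetical model against the US (10^26 FLOPs) and EU (10^25 FLOPs) thresholds; the model size and token count below are illustrative assumptions, not figures from the report.

```python
# Sketch: does a model's estimated training compute cross the regulatory
# thresholds mentioned above? Uses the common ~6*N*D approximation for
# dense transformer training FLOPs; model parameters are hypothetical.

US_THRESHOLD = 1e26  # FLOPs (US training-compute reporting threshold)
EU_THRESHOLD = 1e25  # FLOPs (EU AI Act general-purpose AI threshold)

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute as 6 * N * D."""
    return 6.0 * params * tokens

def thresholds_crossed(flops: float) -> list[str]:
    """Return which jurisdictions' thresholds the estimate meets."""
    crossed = []
    if flops >= EU_THRESHOLD:
        crossed.append("EU")
    if flops >= US_THRESHOLD:
        crossed.append("US")
    return crossed

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(thresholds_crossed(flops))     # below both thresholds
```

As the example suggests, very capable models can sit below both thresholds, which is the mechanism behind the prediction that compute thresholds will lose relevance as smaller, more efficient models proliferate.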

Authors
