Calling for global AI legislation at the AI Action Summit

Authored by Craig Smith, Lecturer in Law, University of Salford’s Business School.

The Artificial Intelligence (AI) Action Summit in Paris this week marks a critical moment in global AI governance. The gathering of 80 nations demonstrates international momentum behind this significant technological advancement; it also underscores the persistent uncertainty over who truly shapes AI’s future. While many countries recognise the need for cohesive AI governance, the competing interests of national governments, private industry, and global institutions make coordination complex and, at times, contentious.

Governments assert themselves as primary stakeholders in the debate on AI regulation, and rightly so, given their control over vast datasets and their ability to wield regulatory frameworks, produce laws, and spur economic growth in the field of AI. Yet the real power dynamics remain contested, and conflict between states and technology companies is beginning to surface. The divide is becoming particularly pronounced as private entities drive AI innovation at an unprecedented pace, often outpacing regulatory efforts. This friction is evident in ongoing debates about data access, intellectual property rights, and the ethical deployment of AI in both commercial and governmental applications.

The DeepSeek effect

China’s rapid deployment of DeepSeek AI highlights how state-backed initiatives can rival, and even unsettle, dominant players like OpenAI. The release of DeepSeek has already had a tangible impact on financial markets, with the U.S. stock market reacting swiftly before rallying again. This underscores the economic weight of AI developments and their ability to shift market dynamics almost instantaneously. Meanwhile, Europe, despite its regulatory strides with the EU AI Act, lacks a homegrown generative AI leader of the same calibre. While the EU AI Act sets a clear aim and a level playing field for AI, it has not created the space for a key private-sector player to emerge. Regulation without a robust innovation ecosystem risks leaving Europe dependent on AI solutions developed elsewhere, particularly in the U.S. and China.

The launch of Stargate

In the United States, AI development is increasingly tied to corporate interests, with government policy leaning towards facilitating private-sector growth rather than direct intervention. Under a second Trump presidency, this trend appears poised to accelerate. On 21 January, Reuters reported that Trump announced a private-sector investment of up to $500 billion to fund infrastructure for artificial intelligence, aiming to outpace rival nations in this critical technology sector.

Trump stated that OpenAI, SoftBank, and Oracle are planning a joint venture called Stargate, which is set to build data centres and create more than 100,000 jobs in the United States. Equity backers of Stargate have also committed $100 billion for immediate deployment, with further investments expected over the next four years.

At the launch event, SoftBank CEO Masayoshi Son, OpenAI CEO Sam Altman, and Oracle Chairman Larry Ellison credited Trump with enabling the initiative, highlighting the increasing entanglement of political and corporate interests in AI’s future. The first of the project’s data centres, each half a million square feet, is already under construction in Texas. According to Ellison, these centres could power AI applications in healthcare, such as analysing electronic health records to assist doctors. This is also a key feature of the UK AI Strategy published in January 2025, in which the Prime Minister drew attention to AI-powered scans that help doctors detect diseases, and to AI’s potential to cut NHS waiting lists through better scheduling.

Enabling economic growth and driving a positive impact on society

While AI holds considerable promise and potential, the level of investment will determine the key players in the field. Rather than directly regulating AI development at the national level, incentivising corporate-led advances that enable economic growth and deliver a positive impact on society sends a far more compelling message than the stale one that often accompanies legislation. While such an approach may accelerate AI through infrastructure expansion, it also raises questions about equitable access, data privacy, and the role of government oversight in ensuring that AI serves broader societal interests rather than consolidating power within a few major corporations.

Furthermore, it raises concerns about environmental impact and long-term sustainability. At the same time, regulation must not become a blunt instrument that stifles AI development or entrenches existing power structures.

Sustainable AI governance

The UK’s AI Safety Summit in 2023 and France’s current initiative reflect a growing recognition that sustainable AI governance must be collaborative, adaptable, and globally coordinated. Sustainability itself is multifaceted: beyond environmental concerns, it demands AI frameworks that serve the public good while fostering responsible innovation. This includes ensuring that AI is developed with fairness, accountability, and transparency at its core, preventing monopolisation by a few powerful entities.

The framing of AI progress as an ‘arms race’ risks obscuring a more pressing reality: the need for an unprecedented level of international cooperation. This is not merely about technological competition, but about establishing a regulatory foundation that balances economic growth, ethical imperatives, and global stability.

If the Paris summit succeeds in advancing this agenda, it will mark a step towards governance that does not simply react to AI’s disruptions but actively shapes its trajectory. Without such proactive measures, AI governance risks being dictated by the most powerful actors, be they states or corporations, rather than by an equitable and globally representative framework.