
The AI Action Summit, held in Paris during the second week of February 2025, marked a departure in how the international community is approaching artificial intelligence (AI). The two previous international summits concentrated on AI safety, while this conference focused on AI adoption and was designed to encourage investment in national AI activities. The summit declaration was signed by 64 countries, but both the United States and the United Kingdom declined to sign on to the statement.
The Paris Declaration on Ethical AI
The Paris AI Action Summit was a follow-up to the U.K.'s AI Safety Summit (November 2023) and the AI Seoul Summit (May 2024). Those earlier events focused on AI safety and risk management. This summit concentrated on innovation and investment in AI, its impact on culture and creativity, environmental sustainability, and improved AI accessibility.
This was a large event, with over 1,000 participants from more than 100 countries. Such a large group generated considerable interest and, ultimately, promises of future AI sector investment, but the final statement was short on concrete actions in order to gain consensus from a majority of the participants. Ultimately, 64 countries and regional organizations — including France, Germany, China, India, Japan, Australia, and Canada — signed the declaration, which presents an “approach that will enable AI to be human rights based, human-centric, ethical, safe, secure and trustworthy while also stressing the need and urgency to narrow the inequalities and assist developing countries in artificial intelligence capacity-building so they can build AI capacities.”
Some of the main priorities affirmed within the declaration are to promote AI accessibility, help innovation in AI thrive, encourage AI deployment that positively shapes the future, and reinforce international cooperation on AI governance. Concrete actions identifying how to meet these objectives are comparatively few.
The U.S. and U.K.'s Decision to Abstain
Conspicuously missing from the list of signatories were the U.S. and the U.K. The two countries declined to sign for different reasons. U.S. Vice President JD Vance, in his speech at the summit, stated that Europe’s propensity for excessive regulation “could kill a transformative industry”. He emphasized that deregulation and “pro-growth” AI policies are the way forward, and he rejected content moderation as “authoritarian censorship”. Vance stressed that the focus needs to be primarily on the positive improvements AI technology can provide.
The summit statement did not advance the issue of AI safety and risks, and that lack of progress was a reason the U.K. declined to sign the declaration. A U.K. spokesperson said it “didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it.”
Balancing Innovation and Governance
The growth of AI technology has created concerns that it needs guardrails to prevent negative impacts on jobs, energy, privacy, and accessibility. The focus in recent years has been on clamping down on potential AI abuses, but with a change in administration, the U.S. will push a policy in which safety, which requires deep regulation, is no longer the primary focus; instead, the emphasis will be on accelerating innovation and finding ways for the technology to advance.
“This growing Atlantic AI Rift is a wakeup call for any organization looking to deploy or operate global AI solutions,” said Andrew Bolster, Senior R&D Manager at Black Duck. He added that the mix of public and private requirements and a heightened threat model is driving a need for more AI-aware security solutions.
Cybersecurity Impact Analysis
International gatherings such as the AI Action Summit are useful venues for stakeholders to discuss issues associated with AI technology and to reach some level of consensus. However, such gatherings operate at a macro level; at the micro level of cybersecurity, the summit and its declaration do not have an immediate impact. The more pressing concern for cybersecurity professionals is what is happening on the ground.
Threat actors are using AI to improve their ability to attack IT systems, and cybersecurity solutions need to match those malicious capabilities. “Looking ahead, the potential for attackers to utilize AI tools to exploit vulnerabilities in open-source software and automatically generate exploit code is a looming threat,” said Venky Raju, Field CTO at ColorTokens. “Even closed-source software is not immune, as AI-based fuzzing tools can identify vulnerabilities without access to the original source code. Such Zero Day attacks are a significant concern for the cybersecurity community.”
Successful cybersecurity in an environment where organizations are being attacked with AI-enabled tools requires defenses of equal capability. Those defensive capabilities should not be hampered by overly restrictive regulations that tie the hands of defenders.
Conclusion
The AI Action Summit signaled that countries are shifting from safety and risk management to action on AI development. The underlying realization is that AI models and capabilities will continue to develop and will be widely used. Governance and regulations still need to be developed, but it is uncertain how they will be crafted and enforced. What is clear is that the international community is now looking at the positive aspects of AI.
There are many reasons to adopt AI, but the technology can also be abused by cybercriminals, so it must likewise be utilized to improve cybersecurity. Pathlock’s CEO Piyush Pandey summed this up by saying, “AI can already positively impact the cybersecurity field far beyond the simple automation of tasks. From intelligent response automation to behavioral analysis, and prioritization of vulnerability remediation, AI is already adding value within the cybersecurity field.”
Irrespective of what governments do, organizations need to deploy AI-enabled cybersecurity to counter the threats created by AI-powered cybercriminals. Malicious actors do not adhere to government guidance, so defensive countermeasures should not be handicapped by it. The move to foster innovation could therefore be a positive development for AI-enabled cybersecurity.