
Can Industry Be the Backseat Driver of International AI Governance?

Originally published on INHR.org.

AI systems do not respect national boundaries. Just as an AI breakthrough in one country is likely to propel innovation in another, an AI incident in one would likely spread harm across borders. No one country or stakeholder group has – or should have – all the solutions to AI’s risks. Rather, we need all countries to be AI governance superpowers, especially in the new and risky field of AI and biosecurity. International coordination is essential to diffuse technological progress and to prevent regulatory gaps or technical vulnerabilities that may result in accidents, or that bad actors might exploit.

Yet governance and oversight of AI remain patchy and fragmented across the world. Secretary-General of the International Telecommunication Union, Doreen Bogdan-Martin, noted during the AI For Good opening address that only 32 countries currently possess sufficient computing capacity to train AI models, and about 85% of countries have no national AI policy at all. AI developers are looking for clear direction, but no cross-cutting themes of AI governance exist even among the leading AI powers. As Artemis Seaford, Head of AI Safety at ElevenLabs, put it at the AI For Good Summit, developers and startups find themselves “caught between a rock and a hard place,” facing the ‘rock’ of uncertainty about national policy and the ‘hard place’ of fragmented international rules.

In other words, industry wants clear and consistent direction assuring companies that they will not run afoul of national policy, along with standards that harmonize across borders. These policies should signal that an AI system deemed safe and compliant today would still be compliant tomorrow, and that one deemed safe in one country might be recognized as such elsewhere. Without international standards, each country, region, and company could end up implementing its own ad hoc safety measures.

Recognizing this, several expert panels at AI For Good called for collaborative development of common standards, testing protocols, and evaluation frameworks for AI that all nations can agree on and adopt. This will require addressing technical uncertainties, navigating geopolitical tensions, and reducing disparities in capacity, all while keeping a clear focus on the shared interest of AI safety and accountability worldwide. And while countries do not agree on which AI risks matter most or what instruments should be implemented to mitigate them, there does appear to be broad international agreement that some policy ought to exist. So, why don’t we have a coordinated approach in place? One panel of international experts identified four primary characteristics of AI that make collaborative development of governance challenging:

  1. Time: AI capabilities develop faster than AI policy.
  2. Uncertainty: It is unclear which AI capabilities and risks will be highest priority.
  3. Geopolitics: Each country has a different appetite for regulation and a different position in the global AI race. Strategic competition adds further challenges.
  4. Concentration of Power: A few countries and companies currently dominate AI capacity, capabilities, and regulatory potential, creating conflicts of interest.

My sense is that time and the concentration of power are the two challenges we can do the least about. The pace of AI progress is as unpredictable as the distance between today’s AI capabilities and artificial general intelligence (AGI), while the economics of AI development all but assure concentration of power. In an effort to be constructive, I’ll consider the challenges of uncertainty and geopolitics here.

Geopolitics: governments may disagree on the details, but AI safety has momentum

The United States is not interested in managing AI development by European standards, as Vice President Vance made clear in his speech at the Paris AI Action Summit earlier this year. The European Union has since walked back some of its own AI safety policy, ostensibly to preserve its hopes of frontier-model relevance. U.S. officials at the AI For Good Centre Stage in Geneva also dug in on keeping AI technology from adversaries and rejecting regulations that would ‘strangle AI in its crib.’ On the same stage, Chinese experts expressed incredulity at being perceived as adversaries despite their frequent participation in UN processes.

Even with those frictions, AI safety has grown into a prominent movement among universities, companies, civil society, and governments. Calls for safe and responsible governance dominated last week’s AI For Good Conference. National actors like the China Academy of Information and Communications Technology (CAICT) are working with the ITU to support international standards. With the U.S. now focused almost exclusively on its national AI strategy, companies at least acknowledge the global dimension in their responsible scaling policies and readiness frameworks.

Regulations are necessary, but several experts at AI For Good stressed that safety standards will be more useful if they emerge from practitioner consensus rather than from government alone. As Arnaud Taddei, Chair of the ITU Study Group on Trust and Security, said, “standardization is born of the moment when we realize we need fewer solutions because they begin creating frictions in the market and we lose money.” Rivalry between nations alone will not prevent AI standards from emerging if companies find it financially necessary to develop them.

Beyond financial motives, there are scientific ones. UC Berkeley Professor Dawn Song added that “we realize that bringing everybody together to the table is necessary to have unified conversation and make progress.” She and other leading AI researchers published a proposal, “A Path to Evidence-Based Policy,” arguing for an approach grounded in science rather than one reliant on global partnerships. Science builds toward a common body of human knowledge even when conducted separately, their proposal argued, as long as we truly understand what each party is trying to say.

Uncertainty: common terminology and identification as first steps toward clarity

The absence of shared terminology is a foundational source of the uncertainty in this field. As an IEEE representative observed, loose concepts like ‘trustworthiness’ can become buzzwords that “don’t always produce comprehensible expectations for technicians” or standards bodies. Taddei added that, frustratingly, ‘trust’ and ‘trustworthiness’ do not mean the same thing: you might ‘trust’ something that does not deserve it, or mistrust something that is ‘worthy’ of trust. Translating these terms out of English only convolutes matters further.

It is easy, however, to see why trust is a loaded term. Ronald Reagan famously said ‘trust but verify’ in reference to U.S.-Soviet arms control agreements. The phrase captures how humans can conflate trust with faith, choosing to set aside reasonable suspicion in order to trust an adversary who has not earned it. But a computer system that trusts on faith is not secure. Here, ‘verify then trust’ is the preferred approach. So, what does trustworthiness mean?

Most people treat trusted friends differently than distant colleagues. Likewise, machines ought to treat less familiar machines more guardedly, and identify themselves when necessary. AI models should be selective in their acceptance of calls from APIs, their treatment of encrypted data, and their approach to sharing or withholding information. Some activities conducted by and between AI agents should require licensing; Alan Chan of the Centre for the Governance of AI posits that IDs might be required for AI agents to conduct large financial transactions, for example. Beyond signifying trustworthiness to access certain information or complete certain tasks, AI IDs could link agents to a country of origin, a developer, an industry, a company operator, or a specific employee. This opens the door to verification of provenance and attribution of responsibility, feats the AI ecosystem cannot yet reliably perform.
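
To make the idea concrete, here is a minimal sketch of what a ‘verify then trust’ check on an agent ID could look like. It is purely illustrative: the AgentID fields, the shared-secret HMAC signature, and the transaction ceiling are assumptions for the example, not an existing standard, registry, or anyone’s API.

```python
# Illustrative sketch only: a toy "agent ID" credential and a verify-then-trust
# check. Field names, the HMAC signature scheme, and the policy threshold are
# assumptions made for this example, not a proposed or existing standard.
import hashlib
import hmac
import json
from dataclasses import asdict, dataclass


@dataclass
class AgentID:
    agent_name: str            # e.g. a deployed agent's registered name
    developer: str             # organization that built the agent
    country_of_origin: str     # jurisdiction the credential links back to
    operator: str              # company or employee operating the agent
    max_transaction_usd: int   # licensed ceiling for financial actions


def sign(credential: AgentID, issuer_key: bytes) -> str:
    """The ID issuer signs the credential so relying parties can verify provenance."""
    payload = json.dumps(asdict(credential), sort_keys=True).encode()
    return hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()


def verify_then_trust(credential: AgentID, signature: str,
                      issuer_key: bytes, requested_usd: int) -> bool:
    """'Verify then trust': check the signature first, then the licensed scope."""
    expected = sign(credential, issuer_key)
    if not hmac.compare_digest(expected, signature):
        return False  # unknown or tampered identity: treat guardedly, refuse
    return requested_usd <= credential.max_transaction_usd


if __name__ == "__main__":
    issuer_key = b"secret-held-by-a-hypothetical-id-issuer"  # placeholder
    cred = AgentID("example-trading-agent", "Example AI Co", "CH",
                   "Example Treasury Desk", 10_000)
    sig = sign(cred, issuer_key)
    print(verify_then_trust(cred, sig, issuer_key, 5_000))   # True: within license
    print(verify_then_trust(cred, sig, issuer_key, 50_000))  # False: over the ceiling
```

A real scheme would presumably rely on public-key certificates issued by an accredited registry rather than a shared secret, but the ordering is the point: the relying system checks identity and licensed scope before it extends any trust.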

It won’t be easy, but we can resolve some of the uncertainty in the meaning of terms and AI agent identities. Coming up with uniform testing and evaluation processes is possible, too; we at INHR have already done so. What might the next steps be in forging effective international cooperation on AI standards, testing, and evaluation? Experts and policymakers have begun sketching out a roadmap:

  1. Clarify and publish “red lines” for AI. Many suggest an international process to agree on a set of unacceptable, high-risk AI behaviors or uses. For example, companies might volunteer that if their model tried to autonomously design bioweapons, they would shut it down. Some U.S. firms already make this commitment, and some critics fault Chinese AI labs for falling short of it.
  2. Establish transparency and accountability norms. Building trust will require companies to come forward with information about their AI models. Anthropic and OpenAI publish ‘model cards,’ and this practice could extend to ‘agent cards’ or greater candor about safety risks.
  3. Reach global standards on testing and evaluation. Countries can cooperate by sharing testing methodologies and results. For instance, if one country’s lab uncovers a vulnerability or misalignment in an LLM, that information should be disclosed. Chinese regulators already require model registration domestically, a practice which could be replicated elsewhere to improve accountability and transparency.

Conclusion: progress on safety and standards as a signal that progress is possible 

It would be naïve to attribute all international conflict (around AI or any other issue) to misunderstanding, but misunderstanding amplifies uncertainty, and uncertainty constrains our ability to solve the AI safety problem. AI can be neither developed nor aligned by any single country or company, and governance of this great tool for human betterment should likewise be handled collectively.

The ITU is doing important work by hosting the nations of the world, but the hard work ahead remains for the world’s inventive engineers. Although the task is daunting, the journey becomes more manageable when countries and their citizens work to understand one another and cooperate. Early but fundamental steps toward developing standards and testing in AI are surely a good sign.