In July, the UN Security Council held its 9,381st meeting, and its first ever focused specifically on artificial intelligence. As Secretary-General António Guterres noted, the technology has the potential to turbocharge global development but can also “help people to harm themselves and each other, at massive scale.”
To avoid that fate, the Council had invited a briefing from Zeng Yi, a professor at the Institute of Automation of the Chinese Academy of Sciences and director of the International Research Center for AI Ethics and Governance in Beijing.
From a giant video monitor, Zeng told the Security Council a story about a young boy who had asked him whether AI could be used to help a nuclear bomb blow up an asteroid headed for Earth, thus saving humanity.
The boy’s idea, Zeng said, was “at least using AI to solve problems for humankind,” but he still advised against it, citing the destruction that AI-empowered nuclear weapons could unleash. The UN, he urged, needs to “play a central role to set up a framework on AI for development and governance to ensure global peace and security,” because humans “should always maintain and be responsible for final decision-making on the use of nuclear weapons.”
With his salt-and-pepper hair and gentle demeanor, Zeng has emerged as the face of China’s campaign for international collaboration on AI governance. Although AI promises to unlock countless advantages and opportunities, it is also seen as something of a Pandora’s box, potentially posing existential threats to humanity. As Zeng made clear, its use in autonomous weapons could revolutionize warfare and upend the international order, and Beijing has repeatedly pointed to the UN as the best arena for setting global guardrails.
Zeng did not respond to The Wire’s requests for comment, but he helped develop UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted by member states in 2021, and on Thursday he was named one of 39 experts on the UN’s new high-level advisory body on AI governance.
The 41-year-old also sits on several government committees that shaped China’s recent domestic AI regulations, which are among the strictest in the world for specific AI systems. In August, Beijing imposed new rules on generative AI that build on 2021 rules for recommendation algorithms as well as a 2022 regulation on deep synthesis.
China’s State Council has also announced that a broader regulatory framework is in the works. Although the actual legislation may take years, scholars from the Chinese Academy of Social Sciences (CASS) released a proposal in August, suggesting the formation of a new agency to oversee AI technology.
As Zeng said at a UN panel in September, “100% of the AI systems need security, safety and ethics frameworks. We don’t really have an option.”
Yet even though many analysts see genuine eagerness on the Chinese side, and from Zeng in particular, to engage the international community on AI, China’s words at the UN about “the well-being of all mankind” can be hard to square with its actions at home.
Numerous human rights groups have criticized Beijing for using AI for social control, including developing facial recognition tools to profile, track and monitor ethnic minorities. Such practices run against the UNESCO principles China endorsed, which explicitly ban the use of AI for mass surveillance.
The U.S. government has taken unprecedented steps to “de-couple” its AI ecosystem from China’s, citing such human rights abuses and arguing that Beijing can’t be trusted to integrate AI into its military operations. After severely restricting China’s ability to import the semiconductor chips that power AI last October, the Commerce Department went a step further earlier this month, strengthening export controls to plug loopholes in the curbs.
Those in favor of the export bans argue that China’s attempts to build guardrails for AI can’t be trusted, and that it’s time for the U.S. to step up. “The U.S. must take the lead in developing global AI standards that uphold human rights and democratic values,” Geoffrey Cain, a senior fellow at the Foundation for American Innovation, testified before a U.S. Senate hearing in June.
So far, the U.S. has let most of the discussion about AI regulation play out at the industry level, with 15 top AI companies signing on to the Biden administration’s voluntary commitments to develop safe, secure and trustworthy AI. But on Monday, the White House is expected to release an executive order on AI, which will call for extensive checks on the technology and require assessments of AI models before federal workers can use them.
A large U.S. delegation will also travel to Bletchley Park this week, where the British government is hosting its AI Safety Summit. The UK’s decision to include China in the summit drew considerable pushback at home and from key allies, including the U.S., the EU and Japan.
“[The Summit] is going to be a test case for China’s willingness and ability to engage constructively from the American and allies’ perspective,” says Jacob Stokes, a fellow at the Center for a New American Security.
It also underscores that the race to develop AI is happening alongside the race to regulate it — with the latter potentially having profound impacts on the former. Just two weeks before the UK AI Safety Summit — and the day after the U.S. announced its most recent chip controls — China unveiled its own Global AI Governance Initiative, a move that some say was an attempt to pre-empt the UK.
“With China seemingly ahead of the curve now,” says Emre Kazim, co-founder of Holistic AI, a London-based platform provider for AI risk management, “it will be interesting to see how others may borrow from its precedent, how East-West relations on AI converge or diverge, and who will triumph over setting the global gold standard for AI governance.”
STOP AND GO
In May 2017, AlphaGo, a computer program developed by Google’s DeepMind, beat Ke Jie, the 19-year-old Chinese world champion of the board game Go. The defeat so bruised national pride that authorities banned live broadcasts of the match.
But Beijing had a plan to strike back. Two months later, it released an ambitious national AI strategy declaring that China would become a global AI innovation center by 2030, with an industry worth more than 1 trillion yuan ($147.8 billion).
AI, the plan predicted, would become a new focus of international competition, and to protect its national security, China would take steps to guide the technology’s development strategically. While the military and commercial benefits of AI are obvious, Beijing’s strategy was notable for also recognizing AI’s applications in governance.
“Essentially the Chinese government sees AI as an enabling technology across the board of what it tries to achieve,” says Rogier Creemers, an assistant professor in law and governance at Leiden University. “It sees it as a key element in finding that third wave of growth, but also in non-economic objectives, such as environmental protection, governance and social services.”
Creemers cites, for instance, China’s use of AI to automate the legal system and help overworked judges deal with repetitive cases, as well as its application of machine learning to monitor soil and groundwater on smart farms.
To realize its vision, China has poured money into AI research and development, successfully leapfrogging many other countries. It now leads the world in both the quality and quantity of AI papers published, according to a Nikkei study in January, and it is second only to the U.S. in the Global AI Index, which ranks 62 countries by their investment, innovation and implementation.
China’s domestic tech giants are also now seen as global leaders in various AI fields. Xpeng and BYD, for instance, rival Tesla when it comes to autonomous driving; Alibaba’s City Brain, which uses real-time data to manage traffic and reduce congestion, has been implemented in 23 cities across Asia; and Infervision, which deploys deep-learning technology in medical imaging to diagnose lung diseases, worked with hospitals in Japan and Italy to detect COVID-19. Tencent even developed its own Go algorithm, Jueyi (or Fine Art), which has beaten Ke Jie time after time.
China’s private sector advances, however, have also drawn scrutiny — both at home and abroad.
In an early sign of U.S.-China technological decoupling, Washington sanctioned SenseTime, a company renowned for its facial recognition programs, in 2021. Although MIT had lauded SenseTime as “a tremendously successful, technologically impressive startup” with investments from Qualcomm and SoftBank Group, the U.S. government alleged that Chinese authorities were using the software to conduct mass surveillance of the Uyghur population.
According to Jeffrey Ding, an assistant professor of political science at the George Washington University, the U.S. quickly became concerned about the dual-use applications of AI innovations like facial recognition or computer vision. Such technologies, he notes, could easily translate into military AI applications, but they could also “prop up and make authoritarian regimes more resilient and more repressive.”
AI innovations, however, can cut both ways for authoritarian regimes. In 2018, the Chinese Communist Party was caught off guard by ByteDance, which had developed a powerful machine-learning algorithm for its news aggregator app, Toutiao. With 120 million daily active users by the end of 2017, the app struck the party as too influential.
“Toutiao was taking away the party’s power to dictate what was the top news item for the day,” says Matt Sheehan, a fellow at the Carnegie Endowment for International Peace. “Every user was getting their own personalized feed, and that feed was based on their interests and not on the interests of the Chinese Communist Party.”
To regain control over the information ecosystem, Beijing released its first AI provisions, which came into effect in March 2022, stating that recommendation algorithms cannot be used to influence online public opinion and directing providers to spread “positive energy.”
Some analysts have argued that China’s urge to censor could ultimately slow down its AI ambitions. Its chatbots, for instance, may not be as powerful as those in the West if they are not trained on as many sources or are forced to screen and censor outputs. So far at least, Western services like ChatGPT and Bard are seen as head and shoulders above Chinese generative AI systems, like Baidu’s Ernie Bot and SenseTime’s SenseChat.
But Ding, of George Washington University, notes that if precedent is any guide, the censorship setback could be but a temporary one for China.
“China has clearly shown that it can keep up with the technological frontier and a variety of social media platforms, even in a censored information environment,” he says.
Beijing has also demonstrated surprising flexibility when it comes to balancing state priorities with innovation culture. When the Cyberspace Administration of China released a draft of regulations on generative AI in April, for instance, it received considerable pushback from AI developers and experts behind closed doors. The draft demanded providers ensure both training data and model outputs are “true and accurate” — a requirement that analysts say is almost impossible to meet, considering generative AI models are typically trained on massive datasets often scraped from the internet.
“[The draft] was considered a major setback for the industry, both for the domestic industry and potentially for global innovation,” says You Chuanman, director of the Centre for Regulation and Global Governance at Chinese University of Hong Kong’s Shenzhen Campus. It led to a “silent period,” as Chinese firms halted the release of their models over compliance fears, even as their competitor ChatGPT drove the market into a frenzy.
You was among the scholars and industry representatives who voiced their concerns at a week-long meeting with regulators during the consultation period. After collecting feedback, Chinese authorities eventually removed some of the most stringent requirements. They made clear the rules apply only to services offered to the public, not to those built for enterprises, significantly narrowing their scope. The final regulation instructs providers to “take effective measures” to enhance the quality of training data and the accuracy of generated content, and a provision imposing fines of up to 100,000 renminbi ($14,000) for violations was dropped.
Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology (CSET), says that the softened tone shows that the Chinese government is trying to maintain “enough control over the AI systems in question so that they don’t say things that the Communist Party doesn’t want them to say, while also not totally squashing industry’s ability to experiment with this technology and find productive and profitable ways to use it.”
It is a delicate balance. Further guidelines published earlier this month proposed blacklisting sources of training data that contain “more than 5 percent of illegal and harmful information,” such as material that advocates violence or damages the country’s image.
While China’s iterative and targeted approach is unique, multiple analysts said it may prove more successful than the approaches of other countries, which are trying to get their arms around all of AI at once. The EU, for example, is stuck searching for a framework comprehensive enough to keep up with the evolving technology.
“[China] gets a bunch of different bites at the apple when it comes to regulation,” Sheehan notes, “and they get to learn from and build upon what they’ve done before. That approach is much more likely to succeed than trying to pass one AI regulation in late 2023 that won’t need to be tweaked as each new technological development emerges.”
Angela Zhang, director of the Center for Chinese Law at the University of Hong Kong, says that Beijing’s proactive regulatory moves could even help create an environment more conducive to AI development. Under the U.S.’s laissez-faire approach, she notes, OpenAI, Google, Microsoft and other leading AI companies are now facing a slew of lawsuits, spanning from copyright infringement and data-privacy violations to defamation and labor disputes.
“China’s regulatory strategy potentially gives Chinese entities an edge over their U.S. counterparts, who are grappling with escalating legal battles, raising their regulatory compliance costs,” Zhang told The Wire.
At stake in the battle to get domestic regulation right is more than just the most popular chatbot. As Stokes, of the Center for a New American Security, notes, it is nearly impossible to separate civilian uses of AI from military uses. Whoever has the commercial edge, he says, will likely have the military edge as well.
“A lot of the innovations that are happening in AI are coming from the civilian sector and then being applied to military purposes,” he says. “This is an inverse situation from what we saw in the Cold War, where a lot of cutting-edge capabilities were developed for the military and government labs and then eventually trickled out to the commercial sector.”
Yet, even at the height of the Cold War, the U.S. and the Soviet Union found ways to collaborate on technological governance.
OUT OF THE GATES
In introducing the UN’s High-level Advisory Body on AI on Thursday, UN Secretary-General António Guterres said the group “will work fast, because we are against the clock.”
“Without entering into a host of doomsday scenarios, it is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion, and threaten democracy itself,” he said.
By the end of the year, the body is supposed to submit recommendations for the international governance of AI. One plausible model, analysts say, is the jurisdictional certification approach jointly proposed by scholars, nonprofits and Microsoft, which would require states to agree on a set of minimal standards for the civilian use of AI while giving them leeway to implement their own domestic legislation.
Borrowing elements from bodies such as the International Civil Aviation Organization, it suggests the formation of an International AI Organization that works with national regulators to determine whether state jurisdictions are complying with international oversight standards. Countries would have an incentive to participate, since certification would give them access to an international market with consistent regulations.
“It takes into account the interests of a range of actors, including frontier AI states and non-frontier AI states. On the other hand, it also allows jurisdictions a certain amount of agency and control themselves, so it tries to strike a balance,” says Robert Trager, a lead author of the proposal and the international governance lead at the Centre for the Governance of AI.
Others note that China’s Global AI Governance Initiative, albeit pre-emptive, marks an opening for collaboration at this week’s UK Summit, since it partially aligns with the objectives the British government has laid out.
“For instance,” says Sihao Huang, a researcher on AI governance at Oxford, “China talks about frontier AI model misuse by terrorist groups, creating an evaluation system for AI and a tier-based risk management framework, which are all very similar to what the UK AI taskforce is doing right now.”
China and the EU, adds Creemers of Leiden University, have also had “parallel evolutions” on questions of AI ethics, including concerns about the technology going rogue or falling into malicious use. “In some areas of market regulation and ethics, Brussels is actually closer to Beijing than it is to Washington,” he says.
But, he adds, the similarities often exist for very different reasons. While China’s data privacy law, for instance, has often been compared to the EU’s General Data Protection Regulation in how it treats companies and consumer rights, the Chinese government itself is not subject to the same limitations on collecting and accessing the private data of its citizens.
And it is precisely those carve-outs for security and police services that make the West suspicious of China’s engagement with AI governance. “There may be more common ground in actuality,” Creemers says, “but the point is, no one wants to do a deal with China right now on anything.”
For China to change that, says Huang, it will need to demonstrate the extent to which “it actually prioritizes security over, for instance, getting access to advanced AI.”
“There are clear areas of shared interest in global safety and security,” he says. “The question really is whether China is interested in compartmentalizing these issues and actually working together with other countries — or if China is going to try to tie together other grievances, like access to advanced chips, to obtain leverage over the West.”
This remains to be seen, but so far China hasn’t wasted any opportunities to take thinly veiled jabs at the U.S. and its export controls. In its statement announcing the Global AI Governance Initiative, Beijing said, “We also oppose creating barriers and disrupting the global AI supply chain through technological monopolies and unilateral coercive measures.” And at the UN Security Council meeting on AI, a Chinese delegate stated that a certain developed country builds “exclusive small clubs” and “maliciously obstructs the technological development of other countries.”
AI’s increasing role in economic and security competition will likely exacerbate these tensions. Huw Roberts, a researcher at the Oxford Internet Institute, argues that each country’s first priority is to shore up its own position on AI, a dynamic that will further undermine mutual trust.
Instead of a new global body on AI, Roberts expects a fragmented ecosystem with smaller inter-governmental and private initiatives. “At the moment, it’s a kind of mess. And this is obviously problematic because it lets countries pick and choose what initiatives they want to follow,” he says.
One of the goals of the UN’s new high-level advisory body, Guterres said, is to bridge existing and emerging initiatives. But few are holding their breath.
Jonas Tallberg, a political scientist at Stockholm University who is directing a project on the global governance of the technology, says the same political gridlock that has gripped the UN’s work on cybersecurity and lethal autonomous weapons will likely stand in the way of reaching an agreement with teeth. “That’s the fear, that either it’ll be bogged down and blocked, or alternatively, so watered down that it becomes essentially meaningless,” he says, warning that the real-world consequences will be profound and transnational.
“Any effort at regulating the development and use of AI also has to be an international collective endeavor,” he says. “Otherwise, AI companies will move to locations with less regulation in order to escape any rules that they would find restrictive on their activities.”
In the end, it might take a catalyst to overcome the lack of political will.
“In the past,” says Ding of George Washington University, “we’ve only seen international governance arrangements of powerful technologies after major accidents, such as nuclear safety. That tipping point has not been reached with AI yet.”
Maybe one day, when an asteroid is heading towards Earth, it will be.
Rachel Cheung is a staff writer for The Wire China based in Hong Kong. She previously worked at VICE World News and South China Morning Post, where she won a SOPA Award for Excellence in Arts and Culture Reporting. Her work has appeared in The Washington Post, Los Angeles Times, Columbia Journalism Review and The Atlantic, among other outlets.