August 18, 2021
Beijing’s Approach to Trustworthy AI Isn’t So Dissimilar from the World’s
As artificial intelligence (AI) systems proliferate, a cottage industry of catchphrases and new initiatives has sprung up around shaping the impacts of the technology: AI for Good, Responsible AI, Human-Centered AI, and Trustworthy AI. Most of these initiatives have roots in American or European institutions, but China recently made a significant foray into the field with the release of its first high-profile white paper on “trustworthy AI.”
Of the various AI-and-society initiatives, trustworthy AI is one of the most technical. While there is no widely agreed-upon definition or framework for trustworthy AI, most efforts tend to focus on ensuring that AI systems do what they’re intended to do: that they’re robust, reliable, transparent, and explainable.
The technical nature of the topic is reflected in the Chinese white paper, which was co-authored by a research lab at e-commerce giant JD and the China Academy of Information and Communications Technology (CAICT). The CAICT is an influential government think tank that is subordinate to—and often represents the position of—China’s Ministry of Industry and Information Technology.
The publication of such a white paper in China, which goes into more technical detail than some of its predecessors, is significant in its own right. Its takeaways also offer clues on how the Chinese government is thinking about the more technical aspects of AI regulation:
- The paper’s definition of trustworthy AI largely conforms with international definitions
- China is paying close attention to the global conversation on AI governance
- Technical testing and evaluation will be central to trustworthy AI in China
The paper’s definition of trustworthy AI largely conforms with international definitions
The white paper’s framework and guiding principles on developing trustworthy AI are represented below (see Figure 1). They include five key “characteristics of trustworthiness”: reliable and controllable, transparent and explainable, data protection, delineation of responsibility, and diversity and inclusiveness.
Figure 1. Chinese Framework for Trustworthy AI
Source: CAICT and JD Exploration Research Institute. Translated by author.
At face value, these principles are very similar to other frameworks offered for trustworthy AI; in fact, they map almost exactly onto a framework put out by the consulting firm Deloitte last year. They also align closely with the OECD’s definition of trustworthy AI.
Noting this similarity isn’t meant to denigrate the Chinese framework. Rather, this similarity illustrates important channels of communication and influence between China and the outside world.
China is paying close attention to the global conversation on AI governance
That external influence on Chinese thinking is further reflected in the white paper, particularly in a section titled “The Whole World Attaches Great Importance to Trustworthy AI.” It is akin to a literature review of the dozens of international initiatives on this topic: the G20 AI Principles, White House executive orders, European AI regulations, and projects at IBM and the Linux Foundation.
The close attention paid to global trends presents an opportunity for those who hope to nudge China’s approach to AI ethics in a different direction. Viewed from the outside, China’s AI governance landscape—particularly its approach to ethics—can look like an increasingly insular space. And given the combative nature of US-China relations today, Americans criticizing China’s use of AI often struggle to gain real traction within the country.
But having these debates outside of China—debates about surveillance, privacy, and safety—allows these concepts to filter back into China, where they are often digested by its policy community. This type of external influence is indirect and unlikely to curb many of the most egregious AI-driven violations of human rights. Still, it remains an important node of connectivity in an increasingly balkanized technology landscape.
Technical testing and evaluation will be central to trustworthy AI in China
Among the many mechanisms the white paper offers for advancing trustworthy AI, one of the most promising is the emphasis on testing and evaluation.
The creation of technical benchmarks for AI systems has been an important ingredient in the field’s progress. Contests like the ImageNet Challenge both spurred and measured progress in computer vision, while benchmarks such as SuperGLUE have done the same for natural language processing. Creating analogous benchmarks for concepts like explainability or robustness could help drive progress in the same way.
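The white paper stays at the conceptual level, but a minimal sketch can make the benchmark idea concrete. The Python snippet below shows one crude way a robustness evaluation might score a model: by measuring how much of its clean accuracy survives random input noise. Everything here (the robustness_score function, the toy linear classifier) is a hypothetical illustration, not a metric drawn from the white paper or from CAICT’s program.

```python
import numpy as np

def robustness_score(predict, inputs, labels, noise_std=0.1, trials=10, seed=0):
    """Hypothetical metric: share of clean accuracy retained under Gaussian input noise."""
    rng = np.random.default_rng(seed)
    clean_acc = np.mean(predict(inputs) == labels)
    if clean_acc == 0:
        return 0.0
    noisy_accs = [
        np.mean(predict(inputs + rng.normal(0.0, noise_std, size=inputs.shape)) == labels)
        for _ in range(trials)
    ]
    return float(np.mean(noisy_accs) / clean_acc)

# Toy usage: score a linear classifier on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w = rng.normal(size=5)
y = (X @ w > 0).astype(int)

def model(inp):
    return (inp @ w > 0).astype(int)

print(f"Robustness score: {robustness_score(model, X, y):.2f}")  # 1.0 = fully robust
```

A real benchmark would of course go much further, with standardized datasets and adversarial rather than random perturbations. But even a simple, shared metric gives developers a common target to optimize against, which is what made ImageNet and SuperGLUE so effective.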
The white paper describes testing and evaluation as a key piece of achieving trustworthy AI goals. More compelling than this mention in the paper alone is the fact that one of the institutions behind the paper, CAICT, has already begun a testing and certification program for some AI systems. In April of this year, CAICT launched its first official “Trustworthy AI Evaluation” program, calling on Chinese AI companies to submit their products for testing.
Given how slippery many concepts in trustworthy AI remain, these evaluations will likely be crude for now. But over time these tests can be strengthened, forming what could become an important cornerstone of both Chinese domestic and international AI governance structures.
Matt Sheehan is a Fellow at MacroPolo. You can find his work on tech policy, AI, and Silicon Valley here.