Google China 2.0 and the Ethics of AI Engagement

On December 13th, Google announced plans to open an artificial intelligence (AI) research lab in Beijing, marking the company’s most significant foray into China since its messy breakup with the Chinese government in 2010. That 2010 exit from the Chinese search market signaled the end—for the time being—of ethical debates about Google’s complicity in Chinese censorship. The new AI center opens fresh ethical questions, this time about engaging China on technologies that will both save lives and empower the world’s most sophisticated surveillance state.

It was in 2017 that AI took center stage in China’s technology aspirations, with the State Council issuing a sweeping blueprint to transform the country into the world’s AI leader. Billions of dollars poured into AI startups, Chinese teams beat out Silicon Valley leaders in prestigious AI competitions, and analysts began questioning whether the United States was losing ground in what may be the new “space race” of this century.

This was also the year in which the Chinese surveillance state raised the bar for Orwellian technology innovation. Facial recognition systems rolled out in cities across China, feeding a national database that strives to identify any one of China’s 1.4 billion citizens within three seconds. Collection of biometric data—voices, palm prints, DNA samples—ramped up in several regions. And data-driven surveillance in the western province of Xinjiang, home to China’s Uighur ethnic minority, turned the region into what some call the world’s first “21st century police state.”

Many, though not all, of these new surveillance technologies are powered by AI. Recent advances in AI have given computers superhuman pattern-recognition skills: the ability to spot correlations within oceans of digital data, and make predictions based on those correlations. It’s a highly versatile skill that can be put to use diagnosing diseases, driving cars, predicting consumer behavior, or recognizing the face of a dissident captured by a city’s omnipresent surveillance cameras. The Chinese government is going for all of the above, making AI core to its mission of upgrading the economy, broadening access to public goods, and maintaining political control.
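To make that versatility concrete, consider the minimal sketch below: a generic classifier that learns correlations from labeled examples and then predicts labels for new ones. Nothing in the pipeline knows what the numbers represent; the same lines run unchanged whether each row encodes a tumor scan or a face from a camera feed. This is an illustrative toy using the open-source scikit-learn library and randomly generated stand-in data, not code from any system discussed here.

```python
# Toy sketch: "pattern recognition" as correlation-learning plus prediction.
# The pipeline never knows what the 64 features represent; rows could just
# as easily describe medical scans or frames from a surveillance camera.
# (Illustrative only: a scikit-learn classifier trained on random stand-in data.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 64))               # 1,000 examples, 64 features each
y = (X[:, :8].sum(axis=1) > 0).astype(int)    # labels correlated with the inputs

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```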

Since the release of the State Council's AI plan, the Chinese government has become one of the largest funders of AI research, and even a major customer for AI-driven products: "City Brains" for optimizing transportation, AI doctors for diagnosing disease, and facial recognition systems for catching suspected criminals attending beer festivals. The country's Ministry of Science and Technology has already picked four Chinese technology companies, Baidu, Alibaba, Tencent, and iFlyTek, for the "national team," tasking them with spearheading development of AI for different use cases.

Google China 2.0: AI Edition

Into this moral quandary has walked Google, perhaps the world’s top company when it comes to AI research and development, and no stranger to controversy in the China market. In announcing the lab, Dr. Fei-Fei Li, chief scientist for AI and machine learning at Google Cloud, rattled off a list of accomplishments by Chinese AI researchers: contributing to nearly half of the articles in the top 100 AI journals and making up most of the winning teams at the prestigious ImageNet object-recognition competition. The new center in Beijing joins a string of other Google AI labs in cities like New York, Toronto, and Zurich, and will focus on basic AI research. “The science of AI has no borders,” Li wrote, “neither do its benefits.”

Some analysts were worried about the absence of another kind of border. Shortly after the announcement, Sinocism China Newsletter author Bill Bishop asked, “How will Google ensure that none of its AI research finds application in China’s security apparatus?” This isn’t a question exclusive to Google—it’s one with which any American technology company conducting AI research in China will have to grapple.

Google will likely build in strong safeguards and mitigation measures to ensure that no products emerging from this lab are sold directly to the Chinese surveillance state. Selling such products would be too jarring a violation of Google's own principles, and would create a public relations firestorm if revealed.

The company doesn't have to look far into the past to see how such fiascos unfold. Yahoo! executives were berated by members of Congress and sued in US courts after the company handed over information about a Chinese journalist's email account to the police in 2004, landing him in jail. Cisco has faced a decade's worth of lawsuits over marketing materials targeting the Chinese government that advertised its technology products as a way of "combating Falun Gong evil religion [sic]." Google is likely too savvy to get its hands dirty in that way.

The New “Dual-Use”

Deliberately preventing AI products from falling into the "wrong" hands is one thing. Protecting basic AI research is another, and far more slippery than protecting data or products. Specific AI applications vary widely, but most are driven by advances in the construction of deep neural networks, the fundamental engines behind those superhuman pattern-recognition abilities. That means almost any advance in basic AI research can be thought of as creating "dual-use" technology. In other words, the improvements in neural networks that let AlphaGo beat the world champion at Go, a Chinese strategy game, can also allow security cameras to more accurately conduct facial recognition on hundreds of individuals in a crowded train station.
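A hedged sketch makes the point plain: in the toy example below, a single generic network architecture is reused across applications, with only the task-specific output layer swapped. This is a hypothetical PyTorch illustration; the make_classifier helper and the class counts are invented for this example and do not describe AlphaGo or any deployed surveillance system. Everything upstream of that final layer, which is to say the basic research, is shared.

```python
# Sketch of "dual-use" in practice: one generic neural-network backbone,
# with only the task-specific output layer swapped per application.
# (Hypothetical PyTorch illustration; make_classifier and the class counts
# are invented for this example and describe no real deployed system.)
import torch.nn as nn

def make_classifier(num_classes: int) -> nn.Sequential:
    """A generic convolutional backbone; only the head differs per task."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_classes),
    )

# Identical research advance, divergent applications:
go_move_scorer = make_classifier(num_classes=361)     # one class per point on a 19x19 Go board
face_matcher   = make_classifier(num_classes=10_000)  # one class per identity on a watchlist
```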

And the same researchers who are at Google today can be at a company with a far different objective tomorrow. Two-way exchanges of people and ideas between Silicon Valley and China have reached unprecedented levels, and the trend is particularly strong in AI. Founders of China’s top facial recognition companies—SenseTime, Face++, DeepGlint and Yitu—all have research experience at top American AI labs, such as Microsoft Research Asia, MIT, and Google Research. Chinese “BAT” giants (Baidu, Alibaba, and Tencent) have all established or announced new AI-focused research facilities in the United States, some with the explicit goal of poaching talent from top American research labs. These flows go both ways, with Chinese AI talent filling the ranks of American tech juggernauts and featuring prominently in international research conferences.

There are, of course, limits to how much knowledge researchers can legally bring with them when moving between organizations. No one would blame Google if it were the victim of outright theft of trade secrets, as is alleged in the lawsuit by Google's self-driving affiliate Waymo against Uber. In that case, a former Waymo engineer is accused of conspiring with Uber to illegally download 14,000 files related to self-driving technology onto a hard drive and bring them to Uber. But many AI experts say that you don't need a hard drive full of files to share advances: once a researcher has tackled a fundamental problem in one research setting, the core of those solutions is easily transported and applied to new data sets.

Even without the movement of actual researchers between institutions, the open publication of much cutting-edge research in AI means that academic breakthroughs in Berkeley, California will almost certainly end up strengthening some aspect of surveillance technologies in Beijing. Google can do its best to build a “firewall” around research at its new China lab, but the reality is that all of these walls will be permeable.

AI Ethics and Savior Complexes

Where does that leave a company that claims "don't be evil" as its guiding moral philosophy? Legally, and likely in the court of public opinion, it won't be hard for Google to put distance between its in-house research and the sprawling security state. And even if Google did have misgivings about these knock-on effects of AI research, it's unclear what the company could do about them. It would be both illegal and highly unreasonable to treat all researchers of Chinese origin, whether in the United States or China, as potential conduits of technology to the Chinese surveillance apparatus. The vast majority of Chinese researchers are just that: researchers, many sharing the same altruistic goals and ethical concerns as their American counterparts.

And these surveillance applications are just one half of the ethical ledger. If the operating assumption is that working with Chinese researchers will accelerate AI advances, the other side of the ledger could include saving hundreds of thousands of lives each year through faster deployment of self-driving cars and AI-driven discovery of new pharmaceutical drugs. (On the other, other side of that ledger are predictions by tech elites like Elon Musk that super-intelligent AI could cause the extinction of the human race.) Many of the lives saved by advances in AI will be invisible to us—it would be nearly impossible to know who would have died in a human-driven car accident had autonomous vehicles not been deployed. But the broader economic and social improvements are likely to be very real, and real-world effects must be part of the equation when rendering ethical judgments.

Debates about the role of US technology or media companies in China are often tinged with a savior complex—an assumption that it’s up to American companies to rescue Chinese citizens from a state of ignorance about their own country. These questions about the role of American AI research may be no different. Regardless of what happens at the Google AI China Center, the country’s own AI experts and security apparatus already have the tools needed to create a comprehensive surveillance state.

Like many of the social problems plaguing Chinese society, the fundamental solutions will have to come from within. There are glimmers of hope on this front: in recent weeks, Chinese citizens have begun speaking out about the value of their personal data and privacy. It's not much, but it's a start.

