AI in the Cloud
Servers are key demand drivers of AI chips because most training of AI algorithms occurs in the cloud. According to Forrester Research and International Data Corporation, the global cloud market increased by more than 30-fold between 2008 and 2018. China’s Huawei has forecast that, by 2025, 97% of companies will be using cloud AI.
But “cloud computing” isn’t ethereal—it’s grounded in massive data centers powered by sprawling server farms. Here’s a close-up of the server market and why it’s important for AI chips.
What is a Server?
A server is simply a device that hosts and processes computer programs. When networked together in a data center, servers have the scale and power to handle an enormous load of computing tasks, far beyond anything that a single machine could handle.
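To make the idea concrete, here is a minimal sketch of a "server" in Python: a process that listens for incoming requests, performs a computation, and returns the result. This is a hypothetical toy using only the standard library, not a depiction of any real data-center software — the handler and its squaring task are invented for illustration.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SquareHandler(BaseHTTPRequestHandler):
    """Respond to GET /<n> with the square of the integer n."""
    def do_GET(self):
        n = int(self.path.lstrip("/"))
        body = str(n * n).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request console logging

# Port 0 asks the OS for any free port; a daemon thread runs the server.
server = HTTPServer(("127.0.0.1", 0), SquareHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "client" sends a request over the network and receives the result.
url = f"http://127.0.0.1:{server.server_port}/7"
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode())  # prints "49"
server.shutdown()
```

A real data center runs thousands of such processes on racks of machines, but the request-compute-respond loop is the same shape.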
Relative to edge devices, servers are far less constrained by physical size and energy budgets, so they can deliver much greater computing power than edge AI hardware, albeit at higher cost. What’s more, cloud computing allows many chips to operate in parallel, further widening the computational advantage over edge devices.
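The parallelism point can be sketched in a few lines: split one workload into shards, hand each shard to a worker, and combine the partial results. This is a toy illustration — Python threads stand in for accelerator chips, and the sum-of-squares task is invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """One 'chip' processes its shard of the data."""
    return sum(x * x for x in chunk)

data = list(range(1_000))
n_workers = 4
# Split the data into one shard per worker (round-robin striping).
shards = [data[i::n_workers] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    total = sum(pool.map(partial_sum, shards))  # combine partial results

print(total)  # same answer as the sequential computation
```

Real AI training distributes matrix operations rather than simple sums, but the split-compute-combine pattern is the essence of why more chips in the cloud means more throughput.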
The enormous potential of cloud computing was on display when Google’s AlphaGo program defeated the world’s leading human Go player in 2016. AlphaGo was the product of AI training in the cloud, powered by Google’s own tensor processing unit (TPU) chip (more on the TPU below).
Of course, cloud computing supports AI applications far beyond playing board games. One example is in the development of new drugs. Clinical trials can take more than a decade to complete, but cloud AI can significantly reduce that time when applied to the initial drug screening. In short, cloud AI can reduce costs and increase efficiency for repetitive tasks that require processing a lot of data.
Cloud computing has become even more appealing with the advent of 5G technology. If 5G wireless networks live up to their potential, they will provide rapid data transmission with low latency, enabling more AI computations to take place in the cloud. This innovation is important for AI applications like autonomous driving, where low latency can mean the difference between life and death.
Global Cloud Computing Market Size
Source: Forrester Research; Forbes; Gartner; Tata Communications.
Google’s TPU Chip
Based on McKinsey’s forecast, about two-thirds of the increase in AI hardware demand will be for servers in data centers. One example of such an AI chip is Google’s TPU. Like many AI chips designed for cloud computing, the TPU is an ASIC (application-specific integrated circuit), meaning it is custom-designed for its workload. Version 3 of the company’s Cloud TPU contains up to 1,024 component chips, each 100 times more powerful than its Edge TPU for end-use devices.
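The workload the TPU is customized for is dense matrix multiplication, the arithmetic that dominates neural-network training and inference. A pure-Python sketch of that operation on tiny matrices shows what is being computed — a TPU performs this same multiply-and-accumulate pattern over thousands of values per clock cycle in dedicated hardware:

```python
def matmul(A, B):
    """Multiply matrix A (m x k) by matrix B (k x n), lists of rows."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# A 2x3 "activation" matrix times a 3x2 "weight" matrix,
# the shape of computation inside a neural-network layer.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(A, B))  # [[58, 64], [139, 154]]
```

Specializing silicon for this one operation, rather than supporting a general-purpose instruction set, is what lets an ASIC like the TPU outperform generic processors on AI workloads.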
The TPU may be powerful, but what may ultimately matter more is TensorFlow, the machine-learning software framework behind it and the core intellectual property that Google developed. Google has made TensorFlow and its ML libraries open source to encourage adoption of its platform ecosystem and gain market share. Google deployed a similar strategy with its Android mobile operating system, which is now the world’s largest by market share.