The march of specialized chips for artificial intelligence continues unabated, and reports from luminaries of the semiconductor industry point to a broadening of the machine learning chip movement.

The well-regarded chip-industry newsletter Microprocessor Report this week reports[1] that cloud computing operators such as Amazon and electronics giants such as Huawei are showing impressive results against the CPU and graphics processing unit (GPU) parts that tend to dominate AI in the cloud. (Microprocessor Report articles are available only by subscription to the newsletter.)

And a think-piece this month in the Communications of the ACM[2], from two legends of chip design, John L. Hennessy and David A. Patterson, explains that circuits for machine learning represent something of a revolution in chip design more broadly. Hennessy and Patterson last year received the prestigious A.M. Turing Award from the ACM for their decades of work on chip architecture.

Also: Google says 'exponential' growth of AI is changing nature of compute[3]

In the Microprocessor Report editorial, the newsletter's principal analyst, Linley Gwennap, describes the rise of custom application-specific integrated circuits for the cloud with the phrase "when it rains, it pours." Among the rush of chips is Amazon's "Graviton," which is now available in Amazon's AWS cloud service. Another is the "Kunpeng 920" from Chinese telecom and networking giant Huawei, which intends to use the chip both in its line of server computers and as an offering in its own cloud computing service.

A block diagram of Google's "TPU" processor for machine learning. (Image: Google)

Both Amazon and Huawei intend to follow up with more parts: a "deep-learning accelerator" from Amazon called "Inferentia," and a neural network part from Huawei.