In recent years, artificial intelligence programs have been prompting changes in the design of computer chips[1], and novel computers have likewise made possible new kinds of neural networks in AI[2]. A powerful feedback loop is at work.

At the center of that feedback loop sits the software technology that converts neural network programs to run on novel hardware. And at the center of that sits a recent open-source project that is gaining momentum.

Apache TVM[3] is a compiler that operates differently from other compilers. Instead of turning a program into typical chip instructions for a CPU or GPU, it studies the "graph" of compute operations in a neural net, expressed in TensorFlow or PyTorch form, such as convolutions and other transformations, and figures out how best to map those operations to hardware based on the dependencies between them.
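
To make that concrete, here is a minimal sketch, not from the article, of what that flow can look like with TVM's Python API: a PyTorch model is traced, imported into TVM's Relay intermediate representation as a graph of operations, and then compiled for a particular hardware target. The model, input shape, input name, and the plain "llvm" CPU target are illustrative assumptions, and exact API details vary across TVM versions.

```python
# Hypothetical sketch: compiling a traced PyTorch model with Apache TVM.
# Model choice, input shape/name, and target are illustrative assumptions;
# API details may differ between TVM releases.
import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Trace a PyTorch model so TVM can read its graph of operations.
model = torchvision.models.resnet18(pretrained=True).eval()
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Import the traced graph into TVM's Relay IR, where convolutions and
# other transformations become nodes in a dependency graph.
mod, params = relay.frontend.from_pytorch(traced, [("input0", (1, 3, 224, 224))])

# Compile the graph for a specific hardware target; "llvm" targets the local CPU.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module and fetch the output.
dev = tvm.device("llvm", 0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input0", tvm.nd.array(example_input.numpy()))
runtime.run()
output = runtime.get_output(0).numpy()
```

The same imported graph can be recompiled for other targets, such as a GPU or an accelerator backend supported by TVM, which is the sense in which the compiler maps one network to a wide variety of hardware.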

At the heart of that operation sits a two-year-old startup, OctoML[4], which offers Apache TVM as a service. As explored in March[5] by ZDNet's George Anadiotis, OctoML is in the field of MLOps, helping to operationalize AI. The company uses TVM to help companies optimize their neural nets for a wide variety of hardware.

Also: OctoML scores $28M to go to market with open source Apache TVM, a de facto standard for MLOps[6]

In the latest development in the hardware and research feedback loop, TVM's process of optimization may already be shaping aspects of how AI is developed.

"Already in research, people are running model candidates  through our platform, looking at the performance," said OctoML co-founder Luis Ceze, who serves as CEO, in an interview with ZDNet via Zoom. The detailed performance metrics mean that ML developers can "actually
