Microsoft has released DeepSpeed, a new deep learning optimization library for PyTorch designed to reduce memory use and train models with better parallelism on existing hardware.

According to a Microsoft Research blog post announcing the new framework, DeepSpeed improves PyTorch model training through a memory optimization technology that increases the number of parameters a model can be trained with and makes better use of the memory local to each GPU, while requiring only minimal changes to an existing PyTorch application.
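To illustrate the "minimal changes" claim: DeepSpeed is driven largely by a JSON configuration file rather than code rewrites. The fragment below is a hedged sketch of what such a configuration might look like, using DeepSpeed's documented config keys (`train_batch_size`, `fp16`, `zero_optimization`); the specific values are illustrative, not taken from the announcement.

```json
{
  "train_batch_size": 32,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 1
  }
}
```

The application then wraps its existing model and optimizer with DeepSpeed's initialization call, pointing it at this file, and the library handles memory partitioning and parallelism behind the same training-loop interface.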


Read more from our friends at InfoWorld