SynapseAI 1.5.0 Release

The Habana® Labs team is happy to announce the release of SynapseAI® version 1.5.0.

SynapseAI 1.5 brings many improvements in both usability and Habana ecosystem support. For PyTorch, we removed the need for weight permutation, as well as the need to explicitly call load_habana_module. See Porting a Simple PyTorch Model to Gaudi for more information. Habana accelerator support has been upstreamed to PyTorch Lightning v1.6 and integrated with grid.ai. To better support large-scale models, we added support for DeepSpeed on Gaudi.
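For a sense of how the simplified flow looks, here is a minimal sketch based on the Porting a Simple PyTorch Model to Gaudi guide. It assumes the lazy execution mode and the habana_frameworks.torch module paths documented for this release; exact names may differ between SynapseAI versions.

```python
import torch
import torch.nn as nn

# Importing habana_frameworks.torch.core registers the HPU device;
# an explicit load_habana_module() call is no longer needed in 1.5.
import habana_frameworks.torch.core as htcore

device = torch.device("hpu")

# Moving the model to the HPU no longer requires a weight permutation step.
model = nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(64, 10).to(device)
labels = torch.randint(0, 2, (64,)).to(device)

optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
htcore.mark_step()   # trigger graph execution in lazy mode
optimizer.step()
htcore.mark_step()
```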

For the first time, we are releasing saved checkpoints for pre-trained models. In 1.5, this includes TensorFlow ResNet-50 and BERT Large. The checkpoints can be found on the Habana Catalog page.

We added more PyTorch APIs, including memory statistics and random number generator (RNG) APIs. For more information, see the HPU APIs page.
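The sketch below shows how these helpers might be used to seed the HPU RNG and inspect device memory. The module paths and function names (habana_frameworks.torch.hpu, its random submodule, memory_allocated, max_memory_allocated, memory_stats) are taken from the HPU APIs documentation and should be verified against the version you install.

```python
import torch
import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.hpu as hthpu
import habana_frameworks.torch.hpu.random as htrandom

device = torch.device("hpu")

# Seed the HPU random number generator for reproducibility.
htrandom.manual_seed(42)

# Run a small workload so there is something to measure.
x = torch.randn(1024, 1024, device=device)
y = x @ x
htcore.mark_step()

# Query memory statistics for the HPU device.
print(hthpu.memory_allocated())       # bytes currently allocated
print(hthpu.max_memory_allocated())   # peak allocation since start/reset
print(hthpu.memory_stats())           # full dictionary of memory counters
```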

In this release, we have made several version updates. We now support PyTorch 1.11.0 (previously 1.10.2), PyTorch Lightning 1.6.4 (previously 1.5.10), and TensorFlow 2.9.1. In addition, we have announced Long Term Support (LTS) for TensorFlow 2.8.x.

Several new reference models have been enabled with the 1.5.0 release. These include PyTorch implementations of YOLOv5, DINO, and DeepSpeed BERT-L and BERT-1.5B, scaling up to 128 accelerators. In addition, more reference models have been validated on Gaudi2, which was launched in May: TensorFlow ResNet-50 on 8 accelerators, BERT-L (FT, PT) on 8 accelerators, ResNeXt-101 on 1 and 8 accelerators, and MaskRCNN on 1 and 8 accelerators. The release also enables PyTorch ResNet-50 on 8 accelerators, BERT-L (FT, PT) on 8 accelerators, and ResNeXt-101 on 1 and 8 accelerators.

You can find more information on Habana’s release notes page.