Amazon Web Services, in collaboration with Facebook, has announced two new projects for the renowned deep learning library PyTorch.
TorchServe, the first of the two projects, is a model serving framework for PyTorch. In a news blog post, AWS said the addition will help users “bring their models to production quicker, without having to write custom code.” TorchServe is set to provide a low-latency prediction API for users’ models and will ship with default handlers for common deep learning applications, such as text classification and object detection.
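To give a sense of what that prediction API looks like from the client side, here is a minimal sketch of an inference call, assuming a model named my_image_classifier has already been registered with a locally running TorchServe instance (which serves inference on port 8080 by default); the model name and image file are placeholders.

```python
import requests

# Hypothetical model name; TorchServe exposes one prediction route per
# registered model on its inference API (port 8080 by default).
INFERENCE_URL = "http://localhost:8080/predictions/my_image_classifier"

# Send an image to the prediction endpoint; the default image classification
# handler returns class probabilities as JSON.
with open("kitten.jpg", "rb") as f:
    response = requests.post(INFERENCE_URL, data=f)

print(response.json())  # e.g. {"tabby": 0.46, "tiger_cat": 0.34, ...}
```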
TorchServe will also include “multi-model serving, model versioning for A/B testing, monitoring metrics, and RESTful endpoints for application integration,” according to the AWS post.
The new framework will come with support for both Python and TorchScript models. It will give users the ability to run multiple versions of the same model simultaneously and will keep track of model versions in an archive, which users can use to revert to older versions of their models.
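As a hedged sketch of how version management might look in practice, the snippet below queries TorchServe’s separate management API (port 8081 in the default configuration) to list registered models, inspect archived versions, and roll back to an earlier one; the model name my_image_classifier and version 1.0 are illustrative placeholders.

```python
import requests

# The management API runs alongside the inference API, on port 8081 by default.
MANAGEMENT_URL = "http://localhost:8081"

# List every model currently registered in the model store.
print(requests.get(f"{MANAGEMENT_URL}/models").json())

# Inspect all archived versions of a single model.
print(requests.get(f"{MANAGEMENT_URL}/models/my_image_classifier/all").json())

# Roll back by making an older archived version the default one served.
requests.put(f"{MANAGEMENT_URL}/models/my_image_classifier/1.0/set-default")
```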
The second update coming to PyTorch is TorchElastic, a library designed to enable fault-tolerant, elastic training of PyTorch models on Kubernetes clusters. AI practitioners can use it to scale cloud training resources up or down according to their needs.
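A rough sketch of what an elastically run training script can look like is shown below. The key point is that the script stays ordinary PyTorch DistributedDataParallel code, while the elastic agent that launches it (for example via TorchElastic’s Kubernetes controller) sets up the process group environment and handles workers joining or leaving; the model, data, and hyperparameters here are placeholders.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # The elastic launcher supplies RANK, WORLD_SIZE, and MASTER_ADDR/PORT
    # environment variables for each worker it starts.
    dist.init_process_group(backend="gloo")

    # Placeholder model and synthetic data for illustration only.
    model = DDP(torch.nn.Linear(10, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):
        inputs = torch.randn(32, 10)
        labels = torch.randint(0, 2, (32,))
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()   # gradients are averaged across all live workers
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```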
Both TorchServe and TorchElastic will support any machine learning environment, “including Amazon SageMaker, container services, and Amazon Elastic Compute Cloud (EC2).”