Amazon Web Services (AWS) announced six new Amazon SageMaker capabilities, including Amazon SageMaker Studio, the first fully integrated development environment (IDE) for machine learning, which make it easier for developers to build, debug, train, deploy, monitor, and operate custom machine learning models.
These announcements give developers powerful new tools, including elastic notebooks, experiment management, automatic model creation, debugging and profiling, and model drift detection, and wrap them in Amazon SageMaker Studio.
Amazon SageMaker is a fully managed service that removes the heavy lifting from each step of the machine learning process. Since launch, AWS has regularly added new capabilities to Amazon SageMaker, with more than 50 new capabilities delivered in the last year alone, including Amazon SageMaker Ground Truth and Amazon SageMaker Neo, which gives developers the ability to train an algorithm once and deploy it on any hardware.
The announcements include significant capabilities that make it much easier for customers to build, train, explain, inspect, monitor, debug, and run custom machine learning models:
- Machine learning IDE: Amazon SageMaker Studio pulls together all of the components used for machine learning in a single place, giving developers the ability to create project folders, organize notebooks and datasets, and discuss notebooks and results collaboratively.
- Elastic notebooks: Amazon SageMaker Notebooks provides one-click Jupyter notebooks with elastic compute that can be spun up in seconds. Each notebook contains everything needed to run or recreate a machine learning workflow. Notebooks can also be shared with one click, because the specific environment and library dependencies are reproduced automatically, making it easier to build models collaboratively.
- Experiment management: Amazon SageMaker Experiments helps developers organize and track iterations of machine learning models by automatically capturing the input parameters, configuration, and results, and storing them as 'experiments'. The full lineage of each experiment is preserved, so if a model begins to deviate from its intended outcome, developers can go back in time and inspect its artifacts.
- Debugging and profiling: Amazon SageMaker Debugger allows developers to debug and profile model training to improve accuracy, reduce training times, and facilitate a greater understanding of machine learning models. With Amazon SageMaker Debugger, models trained in Amazon SageMaker automatically emit key metrics that are collected and can be reviewed in Amazon SageMaker Studio.
- Automatic model building: Amazon SageMaker Autopilot provides the industry's first automated machine learning capability that does not require developers to give up control and visibility into their models. Autopilot can be used by people who lack experience with machine learning to easily produce a model based on data alone, or by experienced developers to quickly develop a baseline model on which teams can further iterate. Autopilot also gives developers up to 50 different candidate models that can be inspected in Amazon SageMaker Studio.
- Concept drift detection: Amazon SageMaker Model Monitor allows developers to detect and remediate concept drift in deployed models. Model Monitor creates a set of baseline statistics about a model during training and compares the data used to make predictions against the training baseline. It alerts developers when drift is detected and helps them visually identify the root cause, so developers can use these out-of-the-box features to detect drift right away, making it easier to adjust the training data or algorithm to accommodate concept drift.
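To make the experiment-management idea concrete, here is a minimal, hypothetical sketch in plain Python of the concept the article describes: each iteration's input parameters, configuration, and results are captured as a trial, and the full lineage is preserved for later inspection. The `ExperimentTracker` class and its method names are illustrative only; this is not the Amazon SageMaker Experiments API.

```python
import copy
import time


class ExperimentTracker:
    """Illustrative tracker: records each training iteration's inputs,
    configuration, and results, preserving the full lineage so earlier
    runs can be inspected later. Not the SageMaker Experiments API."""

    def __init__(self, name):
        self.name = name
        self.trials = []

    def log_trial(self, params, config, results):
        # Deep-copy so later mutations by the caller don't rewrite history.
        self.trials.append({
            "params": copy.deepcopy(params),
            "config": copy.deepcopy(config),
            "results": copy.deepcopy(results),
            "logged_at": time.time(),
        })

    def lineage(self, metric):
        """Return the metric's value across all trials, oldest first."""
        return [t["results"].get(metric) for t in self.trials]

    def best_trial(self, metric, maximize=True):
        """Pick the trial with the best value of `metric`."""
        return (max if maximize else min)(
            self.trials, key=lambda t: t["results"][metric]
        )


tracker = ExperimentTracker("churn-model")
tracker.log_trial({"lr": 0.1}, {"algo": "xgboost"}, {"auc": 0.81})
tracker.log_trial({"lr": 0.05}, {"algo": "xgboost"}, {"auc": 0.85})
```

If `auc` drops in a later trial, the preserved lineage lets a developer walk back through earlier trials and compare the exact parameters and configuration that produced each result.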
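The drift-detection mechanism described above can be sketched in a few lines of plain Python: capture baseline statistics during training, then compare live prediction inputs against that baseline. This is a simplified illustration of the general idea (mean-shift detection on a single feature), not SageMaker Model Monitor's actual implementation; the function names and the 3-standard-error threshold are assumptions chosen for the example.

```python
import math


def baseline_stats(values):
    """Capture simple baseline statistics (mean and standard deviation)
    for one feature from the training data."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return {"mean": mean, "std": math.sqrt(var)}


def drift_detected(baseline, live_values, threshold=3.0):
    """Flag drift when the mean of live prediction inputs deviates from
    the training mean by more than `threshold` standard errors."""
    n = len(live_values)
    live_mean = sum(live_values) / n
    std_err = max(baseline["std"] / math.sqrt(n), 1e-12)
    return abs(live_mean - baseline["mean"]) / std_err > threshold


# Baseline from (hypothetical) training data, then two live batches:
base = baseline_stats([1.0, 2.0, 3.0, 4.0, 5.0])
drift_detected(base, [2.9, 3.1, 3.0, 2.8, 3.2])  # similar data: no drift
drift_detected(base, [9.0, 9.1, 8.9, 9.2, 9.0])  # shifted data: drift
```

A production monitor would track many features and richer statistics (quantiles, missing-value rates, category frequencies), but the core loop is the same: a baseline built at training time, compared continuously against the data the model sees in production.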
Swami Sivasubramanian, Vice President, Amazon Machine Learning, AWS, said: "Today, we are announcing a set of tools that make it much easier for developers to build, train, explain, inspect, monitor, debug, and run custom machine learning models. Many of these concepts have been known and used by software developers to build, test, and maintain software for many years; however, they were not available for developers building machine learning models. Today, with these launches, we are bringing these concepts to machine learning developers for the very first time."
(Image courtesy: www.towardsdatascience.com)