Operationalize ML models built in Amazon SageMaker Canvas to production using the Amazon SageMaker Model Registry

You can now register machine learning (ML) models built in Amazon SageMaker Canvas with a single click in the Amazon SageMaker Model Registry, allowing you to operationalize ML models in production. Canvas is a visual interface that enables business analysts to generate accurate ML predictions on their own, without requiring any ML experience or having to write a single line of code. While it’s a great place for development and experimentation, to get value from these models, they need to be put to work; that is, deployed in a production environment where they can be used to make predictions or decisions. Now, with the Model Registry integration, you can store all model artifacts, including metadata and performance metric baselines, in a central repository and plug them into your existing model deployment CI/CD processes.

The model registry is a repository that catalogs ML models, manages multiple model versions, associates metadata (such as training metrics) with a model, manages a model’s approval status, and supports deployment to production. After building a model version, you typically want to evaluate its performance before deploying it to a production endpoint. If it meets your requirements, you can update the approval status of the model version to Approved. Setting the status to Approved can initiate CI/CD deployment for the model. If the model version doesn’t meet your requirements, you can update the approval status to Rejected in the registry, which prevents the model from being deployed into a scaled environment.
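As a sketch of what this approval step looks like programmatically, the following hypothetical helper builds the arguments for the SageMaker UpdateModelPackage API call; the ARN in the commented usage is a placeholder:

```python
def approval_request(model_package_arn, status):
    """Build the arguments for the SageMaker UpdateModelPackage call.

    status must be one of "Approved", "Rejected", or "PendingManualApproval".
    """
    if status not in ("Approved", "Rejected", "PendingManualApproval"):
        raise ValueError(f"unexpected approval status: {status}")
    return {
        "ModelPackageName": model_package_arn,  # accepts the model version ARN
        "ModelApprovalStatus": status,
    }

# Against a real registry (requires boto3 and AWS credentials):
# import boto3
# sm = boto3.client("sagemaker")
# sm.update_model_package(**approval_request(model_package_arn, "Approved"))
```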

A model registry plays a key role in the model deployment process because it packages all model information and enables automation of model promotion to production environments. Here are some ways a model registry can help put ML models to work:

  • Version control – A model registry allows you to keep track of different versions of your ML models, which is essential when deploying models to production. By tracking model versions, you can easily revert to an earlier version if a new version causes problems.
  • Collaboration – A model registry enables collaboration between data scientists, engineers, and other stakeholders by providing a centralized location to store, share, and access models. This can help streamline the deployment process and ensure that everyone is working with the same model.
  • Governance – A model registry can help with compliance and governance by providing an auditable history of model changes and deployments.

Overall, a model registry can help streamline the process of deploying ML models to production by providing version control, collaboration, and governance.

Solution overview

For our use case, we take on the role of a business user in the marketing department of a mobile operator who has successfully built an ML model in Canvas to identify customers at risk of churn. Thanks to the predictions generated by our model, we now want to move it from our development environment to production. However, before our model is deployed to a production endpoint, it must be reviewed and approved by a central MLOps team. This team is responsible for managing model versions, reviewing all metadata associated with a model (such as training metrics), managing the approval status of each ML model, deploying approved models to production, and automating model deployment with CI/CD. To streamline the process of deploying our model to production, we take advantage of Canvas’s integration with the Model Registry and register our model for review by our MLOps team.

The workflow steps are as follows:

  1. Upload a new dataset with the current customer population to Canvas. For the full list of supported data sources, see Import data into Canvas.
  2. Build ML models and analyze their performance metrics. For instructions, see the guidance on building a custom ML model in Canvas and evaluating the model’s performance.
  3. Register the best performing versions in the model registry for review and approval.
  4. Deploy the approved model version to a production endpoint for real-time inference.

You can perform steps 1-3 in Canvas without writing a single line of code.


Prerequisites

For this tutorial, make sure the following prerequisites are met:

  1. To register model versions in the model registry, the Canvas administrator must give the Canvas user the necessary permissions, which you can manage in the SageMaker domain that hosts your Canvas application. For more information, see the Amazon SageMaker Developer Guide. When you grant your Canvas user permissions, you must choose whether to allow the user to register their model versions in the same AWS account.

  2. Implement the prerequisites mentioned in Predict customer churn with no-code machine learning using Amazon SageMaker Canvas.

You should now have three model versions trained on historical churn prediction data in Canvas:

  • V1, trained with all 21 features and a Quick build configuration, with a model score of 96.903%
  • V2, trained with 19 features (the Phone and State features removed) and a Quick build configuration, with an improved accuracy of 97.403%
  • V3, trained with a Standard build configuration, with a model score of 97.03%

Use the customer churn prediction model

We enable Show advanced metrics and review the objective metrics associated with each model version so that we can select the best-performing model to register.

Based on the performance metrics, we select version 2 to register.

The model registry keeps track of all the model versions that you train to solve a particular problem in a model group. When you train a Canvas model and register it in the model registry, it’s added to a model group as a new model version.

A model group is automatically created in the model registry at the time of registration. You can optionally rename it to a name of your choice, or use an existing model group in the model registry.

For this example, we use the auto-generated model group name and choose Add.

Our model version should now be registered in the model group in the model registry. If we were to register another model version, it would be registered in the same model group.
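A downstream CI/CD process often needs to pick out the newest approved version from such a model group. As a minimal sketch (a hypothetical helper working on the shape of the ListModelPackages response; the group name in the commented usage is the one from this post):

```python
def latest_approved(model_package_summaries):
    """Given the ModelPackageSummaryList from a ListModelPackages response,
    return the ARN of the most recently created Approved version, or None."""
    approved = [
        s for s in model_package_summaries
        if s.get("ModelApprovalStatus") == "Approved"
    ]
    if not approved:
        return None
    return max(approved, key=lambda s: s["CreationTime"])["ModelPackageArn"]

# Against a real registry (requires boto3 and AWS credentials):
# import boto3
# sm = boto3.client("sagemaker")
# resp = sm.list_model_packages(ModelPackageGroupName="canvas-Churn-Prediction-Model")
# print(latest_approved(resp["ModelPackageSummaryList"]))
```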

The model version status should have changed from Not registered to Registered.

When we hover over the status, we can review the model registration details, including the model group name, model registry account ID, and approval status. Right after registration, the status changes to Pending approval, which means this model is registered in the model registry but is pending review and approval by a data scientist or MLOps team member, and can only be deployed to an endpoint if approved.

Now let’s go to Amazon SageMaker Studio and assume the role of an MLOps team member. Under Models in the navigation pane, choose Model registry to open the model registry home page.

We can see the model group canvas-Churn-Prediction-Model that Canvas automatically created for us.

Choose the model to review all versions registered in that model group, and then review the corresponding model details.

If we open the details of version 1, we can see that the Activity tab keeps track of all the events that happen to the model.

On the Model quality tab, we can review model metrics, precision/recall curves, and confusion matrix plots to understand model performance.

On the Explainability tab, we can review the features that most influenced the model’s performance.
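The same review information shown in Studio is also surfaced through the DescribeModelPackage API. As a sketch, this hypothetical helper pulls the S3 location of the model quality statistics out of the response, if present:

```python
def model_quality_uri(describe_response):
    """Pull the S3 URI of the model quality statistics, if present,
    from a DescribeModelPackage response dict."""
    return (
        describe_response.get("ModelMetrics", {})
        .get("ModelQuality", {})
        .get("Statistics", {})
        .get("S3Uri")
    )

# Example (requires boto3 and AWS credentials):
# import boto3
# sm = boto3.client("sagemaker")
# resp = sm.describe_model_package(ModelPackageName=model_package_arn)
# print(resp.get("ModelApprovalStatus"), model_quality_uri(resp))
```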

After we have reviewed the model artifacts, we can change the approval status from Pending approval to Approved.

Now we can see the updated activity.

The Canvas business user will now be able to see that the status of the registered model changed from Pending approval to Approved.

As a member of the MLOps team, now that we have approved this ML model, let’s deploy it to an endpoint.

In Studio, navigate to the model registry home page and choose the canvas-Churn-Prediction-Model model group. Choose the version you want to deploy and go to the Configuration tab.

Note the model package ARN for the model version you selected in the model registry.

Open a notebook in Studio and run the following code to deploy the model to an endpoint. Replace the model package ARN with your own model package ARN.

```python
from sagemaker import ModelPackage
import boto3
import sagemaker

boto_session = boto3.session.Session()
sagemaker_client = boto_session.client("sagemaker")
sagemaker_session = sagemaker.session.Session(
    boto_session=boto_session, sagemaker_client=sagemaker_client
)
role = sagemaker.get_execution_role()

# Replace with your own model package ARN
model_package_arn = 'arn:aws:sagemaker:us-west-2:1234567890:model-package/canvas-churn-prediction-model/3'

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=sagemaker_session,
)
model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```

After the endpoint is created, you can see it as an event on the Activity tab of the model registry page.

You can double-click the endpoint name to get its details.

Now that we have an endpoint, let’s invoke it for real-time inference. Replace the endpoint name in the following code snippet with your own:

```python
import boto3

sm_rt = boto3.Session().client('runtime.sagemaker')

# Replace with the name of your endpoint
endpoint_name = '<your-endpoint-name>'

payload = [163, 806, 'no', 'yes', 300, 8.162204, 3, 3.933, 2.245779, 4, 6.50863, 6.05194, 5, 4.948816088, 1.784764, 2, 5.135322, 8]
body = ",".join([str(p) for p in payload])

response = sm_rt.invoke_endpoint(
    EndpointName=endpoint_name, ContentType="text/csv", Accept="text/csv", Body=body
)

prediction = response['Body'].read().decode("utf-8")
```
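The decoded body is a CSV string whose exact fields depend on the model (for a binary churn model, typically the predicted label, possibly with a probability score). As a small sketch, a hypothetical helper to split it into fields:

```python
def parse_csv_response(body):
    """Split a one-line CSV inference response into its fields.

    The exact fields depend on the model; for a binary churn model this is
    typically the predicted label, possibly followed by a probability score.
    """
    return [field.strip() for field in body.strip().split(",")]
```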

Clean up

To avoid incurring future charges, please delete the resources you created while following this post. This includes logging out of Canvas and deleting the deployed SageMaker endpoint. Canvas bills you for the duration of your session, and we recommend that you log out of Canvas when you’re not using it. See Logging Out of Amazon SageMaker Canvas for more information.
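As a sketch of how the endpoint cleanup could be done programmatically (a hypothetical helper; the describe and delete calls it makes are standard SageMaker API operations, and it assumes AWS credentials are configured when run for real):

```python
def delete_endpoint_resources(sm, endpoint_name):
    """Delete an endpoint along with the endpoint config and models behind it.

    sm is a boto3 SageMaker client (or any object with the same methods)."""
    # Look up the endpoint config and its models before deleting anything
    config_name = sm.describe_endpoint(EndpointName=endpoint_name)["EndpointConfigName"]
    config = sm.describe_endpoint_config(EndpointConfigName=config_name)
    sm.delete_endpoint(EndpointName=endpoint_name)
    sm.delete_endpoint_config(EndpointConfigName=config_name)
    for variant in config["ProductionVariants"]:
        sm.delete_model(ModelName=variant["ModelName"])

# Usage (requires boto3 and AWS credentials):
# import boto3
# delete_endpoint_resources(boto3.client("sagemaker"), "<your-endpoint-name>")
```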


Conclusion

In this post, we discussed how Canvas can help operationalize ML models in production environments without requiring ML expertise. In our example, we showed how an analyst can quickly build a highly accurate predictive ML model without writing any code and register it in the model registry. The MLOps team can either review it and reject the model or approve it and start the downstream CI/CD deployment process.

To start your low-code/no-code ML journey, check out Amazon SageMaker Canvas.

Special thanks to everyone who contributed to the release:

Back end:

  • Huayuan (Alice) Wu
  • Krittaphat Pugdeethosapol
  • Yanda Hu
  • Joan He
  • Esha Dutta
  • Prashant


About the Authors

Janisha Anand is a Senior Product Manager on the SageMaker Low/No Code ML team, which includes SageMaker Autopilot. She enjoys coffee, staying active and spending time with her family.

Krittaphat Pugdeethosapol is a software development engineer at Amazon SageMaker and works primarily with low-code and no-code SageMaker products.

Huayuan (Alice) Wu is a software development engineer at Amazon SageMaker. She focuses on building ML tools and products for customers. Outside of work, she enjoys the outdoors, yoga, and hiking.
