
Creating Azure Machine Learning Deployments

With the Azure Machine Learning integration set up, you can create Azure Machine Learning Deployments. Typically, creating such a Deployment involves the steps outlined in Creating a Deployment. However, it's worth noting that certain settings are exclusive to KServe Deployments and not applicable to Azure Machine Learning Deployments, and vice versa. In this article, we will highlight only the parts unique to Azure Machine Learning Deployments.

Prerequisites

  • You added a Repository that adheres to the requirements. Note that Repositories used for Azure Machine Learning Deployments must use the reference system.
  • (Optional) Add a model to the model registry in your Azure Machine Learning workspace.
  • Include an AzureML reference, as illustrated in this example:
{
  "reference": {
    "azureML": {
      "image": "0c2df065d0f143c98e4b91f98cb91f10.azurecr.io/azure-machine-learning-fraud-detection-explainer:0.1.0",
      "uri": "/v1/models/fraud-detection:predict",
      "port": 8080,
      "readinessPath": "/v1/ready",
      "livenessPath": "/v1/health",
      "model": "fraud_detection",
      "version": "1"
    }
  }
}

The image, uri, port, readinessPath, and livenessPath fields are mandatory; model and version are optional and refer to the name and version of a model in your Azure Machine Learning workspace's model registry. If model and version are specified, the model file(s) will be available inside your model or explainer container at the path given by the MODEL_BASE_PATH environment variable (retrieve it as an environment variable in your code), and the model name is available as the MODEL_NAME environment variable. Your model and explainer images must expose API routes at readinessPath and livenessPath that return a 200 status code whenever the container is up and running. Check out this repository for an example implementation.
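
To illustrate, here is a minimal sketch of a model server that would satisfy the reference above. It assumes a Flask-based container and a joblib-serialized model file named model.pkl (both assumptions, not requirements); the routes and port match the example reference, and the "instances"/"predictions" request shape is only a convention your own image is free to change:

import os

import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)

# Deeploy sets these when "model" and "version" are specified in the AzureML reference.
MODEL_BASE_PATH = os.environ.get("MODEL_BASE_PATH", ".")
MODEL_NAME = os.environ.get("MODEL_NAME", "fraud_detection")

# Hypothetical file name; use whatever artifact you registered in the model registry.
model = joblib.load(os.path.join(MODEL_BASE_PATH, "model.pkl"))

@app.route("/v1/ready")    # matches "readinessPath" in the reference
@app.route("/v1/health")   # matches "livenessPath" in the reference
def health():
    # Return 200 whenever the container is up and the model is loaded.
    return jsonify({"status": "ok"}), 200

@app.route("/v1/models/fraud-detection:predict", methods=["POST"])  # matches "uri"
def predict():
    instances = request.get_json()["instances"]
    return jsonify({"predictions": model.predict(instances).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)   # matches "port" in the reference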

Repository, branch and commit

Select a Repository, branch, and commit as usual.

Deployment metadata

Specify a name and retrieve the metadata as usual.

Model framework

Only the Custom Docker model framework is allowed. Therefore, you cannot select a model framework in this step.

Click Advanced configuration to select an instance type and instance count, which lets you choose the appropriate mix of resources for your model. See this guide for choosing an instance type based on your use case.

Explainer Framework

Only the Custom Docker explainer framework is allowed. Therefore, you can only select No explainer or Custom Docker in this step. Under Advanced configuration, you can select the instance type and instance count.

Transformer

Transformers are disabled for Azure Machine Learning Deployments; include any additional pre- and post-processing in the model or explainer Docker image, for example as sketched below.
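
For example, continuing the Flask sketch above, pre- and post-processing can live directly in the predict handler. The preprocess and postprocess helpers here are hypothetical placeholders for your own logic:

def preprocess(instances):
    # Hypothetical example: drop a leading ID column and cast features to floats.
    return [[float(value) for value in row[1:]] for row in instances]

def postprocess(predictions):
    # Hypothetical example: map class indices to business labels.
    labels = {0: "legitimate", 1: "fraud"}
    return [labels[int(p)] for p in predictions]

@app.route("/v1/models/fraud-detection:predict", methods=["POST"])
def predict():
    instances = preprocess(request.get_json()["instances"])
    predictions = model.predict(instances)
    return jsonify({"predictions": postprocess(predictions)})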

Compliance

Compliance works as usual. For Azure Machine Learning Deployments, we assume the Docker image is hosted on a private image registry.

Deploy

Click Deploy. Deeploy will now initiate the automated deployment process.

Created Resources

Deeploy will create three Azure Machine Learning resources for your model and three for your explainer.

These are an Environment, an Online endpoint, and a Deployment. You can find them in the Azure portal once the deployment has succeeded.