Creating MLFlow Deployments
With the MLFlow integration set up, you can create Deployments following the steps outlined in Creating a Deployment. MLFlow is not supported for SageMaker deployments. For Azure Machine Learning deployments, review the Azure Machine Learning documentation on how to use the model registry.
Prerequisites
- You added a Repository that adheres to the requirements. Note that Repositories used for MLFlow Deployments must use the reference system.
- (Optional) Include Blob details, as illustrated in this example:

```json
{
  "reference": {
    "mlflow": {
      "model": "my-model",
      "alias": "champion",
      "blob": {
        "region": "eu-central-1"
      }
    }
  }
}
```
Make sure that the model and explainer private object storage credentials you select during deployment have access to the MLFlow model registry storage location.
- Only one of `alias`, `stage`, or `version` can be present, where `version` is a number in the form of a string, e.g. `"5"`. Note that staging is a deprecated MLFlow feature; we advise you to use either `alias` or `version`.
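For example, a `reference.json` that pins a specific model version instead of an alias could look like this (hypothetical values):

```json
{
  "reference": {
    "mlflow": {
      "model": "my-model",
      "version": "5"
    }
  }
}
```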
Explainer
We recommend deploying an explainer using MLFlow, but we also provide the option to use, for instance, a custom Docker image as explainer. When using MLFlow, we require the `model` and `alias`, `stage`, or `version` to be the same as configured in your model's `reference.json`. We will look for an `explainer.dill` file in the `explainer` artifact folder of your model's run that corresponds to the `version` or `stage` specified.
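Assuming the convention described above, the artifacts of the model's run would contain roughly the following (folder names as stated above; the rest of the artifact tree depends on your model):

```
<run-artifacts>/
└── explainer/
    └── explainer.dill
```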
Updating Model
If you have configured your model using `alias` or `stage` and transitioned a new model version to that alias or stage, you can update your model and explainer without needing to reconfigure your Deeploy settings. You can either update via the Deeploy application (press Update on the details page) or call Deeploy's API to patch a Deployment with an empty request body. See our Swagger documentation for more information.
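As a minimal sketch of the API route, the patch request with an empty body could be built as follows. The host, path, and token below are hypothetical placeholders; consult the Swagger documentation for the exact endpoint and authentication headers.

```python
import urllib.request

# Hypothetical endpoint -- replace the host and IDs with your own Deeploy
# workspace and Deployment; see the Swagger documentation for the real path.
url = (
    "https://api.deeploy.example.com"
    "/workspaces/my-workspace/deployments/my-deployment"
)

# An empty request body tells Deeploy to re-resolve the configured alias
# or stage and update the Deployment to the newest model version.
request = urllib.request.Request(url, data=b"", method="PATCH")
request.add_header("Authorization", "Bearer <personal-access-token>")

# urllib.request.urlopen(request)  # uncomment to actually send the request
```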
Example
For a full example, please check out this repository, which deploys a Scikit-learn model and SHAP explainer to Deeploy using MLFlow.
PyFunc models
Store models in the MLFlow model registry as a PyFunc flavor. With PyFunc models, custom Python logic (e.g. preprocessing and postprocessing) can easily be included in the model binary. PyFunc models can leverage custom PyPI packages, for which you need to include a reference in the model's artifacts.
Serve your PyFunc model in Deeploy with custom Docker containers. This end-to-end PyFunc example shows how to easily wrap PyFunc models as custom Docker containers using the `mlflow` CLI.
Make sure that your PyFunc model adheres to the v1 or v2 input format in order to make full use of Deeploy's monitoring capabilities.
The Docker container starts an MLFlow server that accepts four input types; use `instances` or `inputs` to be compatible.
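For example, a request body using the `inputs` key could look like this (the feature values are hypothetical and depend on your model's signature):

```json
{
  "inputs": [[5.1, 3.5, 1.4, 0.2]]
}
```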
Currently it is not possible to use a transformer with a PyFunc model due to an incompatible model endpoint. For explainers, only custom Docker is supported.