Version: 1.42

Create a Deployment

A Deployment represents an instance of a machine learning model. Every Deployment has an owner, who is responsible for its management; initially, the user who creates the Deployment becomes its owner. Creating a Deployment is straightforward and can be completed in a few steps using the Deployment API, Python Client, or UI. Some steps are unique to specific Deployment types.

tip

For large Repositories, we suggest creating Deployments using the Python Client to speed up the process.
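
A minimal sketch of creating a Deployment through the Python Client is shown below. The class, method, and parameter names used here (Client, CreateDeployment, create_deployment, host, workspace_id, repository_id, branch_name, commit) are assumptions based on typical usage and may differ between client versions; consult the Python Client reference for the exact interface.

```python
# Minimal sketch using the Deeploy Python Client. Class, method, and
# parameter names below are assumptions and may differ per client version;
# consult the Python Client reference for the exact interface.
from deeploy import Client, CreateDeployment

client = Client(
    host="example.deeploy.ml",            # hypothetical Deeploy host
    workspace_id="<WORKSPACE_ID>",
    access_key="<PERSONAL_ACCESS_KEY>",
    secret_key="<PERSONAL_SECRET_KEY>",
)

options = CreateDeployment(
    name="my-deployment",
    description="Short description of the Deployment",
    repository_id="<REPOSITORY_ID>",
    branch_name="main",
    commit="<COMMIT_SHA>",
)

deployment = client.create_deployment(options)
```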

Prerequisites

Repository

You can either Link a Repository, connect an already linked Repository, or omit the connection for external and registration Deployments. When connecting to a linked Repository, Deeploy displays the available options: choose a Repository, branch, and commit. If your model, explainer, and/or transformer folders are not located in the root directory of the selected commit, disable Use root folder and select the folder that contains these components.

Deployment

Name your Deployment and, optionally, add a short description. Define the risk classification of your Deployment based on the EU AI Act. If you have connected a Repository that contains a metadata.json file, you can retrieve and review its metadata.
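
As a hedged illustration, the snippet below generates a metadata.json at the root of the Repository; the keys shown are placeholders only, since the supported keys are defined in Deeploy's metadata reference.

```python
import json

# Illustrative metadata.json; the keys below are placeholders only --
# consult Deeploy's metadata reference for the keys it actually supports.
metadata = {
    "exampleInput": {"instances": [[5.1, 3.5, 1.4, 0.2]]},
    "exampleOutput": {"predictions": [0]},
}

with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```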

When creating a standard Deployment, you have the option to switch the deployment service. Disable Use default Deployment service and choose your preferred service. Workspace owners can change the default Deployment service on the Workspace Integrations page.

Model (standard Deployments)

Select the model framework that you used to train the model. Supported frameworks, versions, and examples can be found in Supported framework versions for KServe. Alternatively, deploy a Custom Docker image.
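
As a minimal, hedged example, the snippet below trains a scikit-learn model and serializes it into a model folder of the Repository; the model.joblib file name follows the common KServe scikit-learn convention, so verify the expected file name for your framework version in Supported framework versions for KServe.

```python
import os

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a simple scikit-learn classifier and serialize it into the
# Repository's model folder. The model.joblib file name follows the common
# KServe scikit-learn convention; verify the expected name for your version.
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

os.makedirs("model", exist_ok=True)
joblib.dump(clf, "model/model.joblib")
```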

For more information on the advanced model options, see Advanced Deployment options.

Explainer (standard Deployments)

Select the explainer framework that you used to create the explainer. For detailed information, see Deploying an explainer. Supported frameworks, versions, and examples can be found in Supported framework versions for KServe.
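
Continuing the scikit-learn sketch above, the example below fits an Alibi anchor explainer against the serialized model and stores it in an explainer folder; the folder name and the dill file format are assumptions, so check Deploying an explainer for the layout Deeploy expects.

```python
import os

import dill
import joblib
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris

# Load the model saved in the earlier sketch and fit an Alibi anchor
# explainer against its predict function. The explainer folder name and the
# dill serialization are assumptions; see "Deploying an explainer" for the
# layout Deeploy expects for your explainer framework.
data = load_iris()
clf = joblib.load("model/model.joblib")

explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)

os.makedirs("explainer", exist_ok=True)
with open("explainer/explainer.dill", "wb") as f:
    dill.dump(explainer, f)
```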

For more information on the advanced explainer options, see Advanced Deployment options.

Transformer (standard Deployments)

Select the transformer framework that you used to create the transformer. For detailed information, see Deploying a transformer. Supported frameworks, versions, and examples can be found in Supported framework versions for KServe.

For more information on the advanced transformer options, see Advanced Deployment options.

Connection (external Deployments)

Provide the URL where your external model can be accessed. Select your preferred authentication method and enter your credentials. Optionally, you can perform a connection check to ensure Deeploy can access the model before proceeding.
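
As a hedged illustration of such a check, the snippet below sends a test request to a placeholder external endpoint with a bearer token; substitute the URL, payload, and authentication method you actually configured for the external Deployment.

```python
import requests

# Placeholder endpoint, payload, and bearer token -- substitute the URL and
# the authentication method configured for your external Deployment.
url = "https://models.example.com/v1/models/my-model:predict"
headers = {"Authorization": "Bearer <TOKEN>"}
payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}

response = requests.post(url, json=payload, headers=headers, timeout=10)
print(response.status_code, response.text)
```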

Compliance

Fill in the compliance insights to comply with responsible AI standards. Filling in the compliance insights is optional, but advisable for high-risk applications. Checklist templates cannot be fully assessed yet and are therefore read-only.

Deploy

Click Deploy or Register; Deeploy will then initiate the automated deployment process. You will be directed to the Events page.