Version: 1.40

Evaluate

This section explains how to evaluate the predictions made by a Deployment.

Prediction Evaluation

Evaluating a prediction means overwriting, or correcting, the model's output. An evaluation can serve as a reference, or as training data for the next iteration of the model.

Predictions are evaluated using a prediction log's ID. These IDs can be retrieved with the prediction log methods.
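
For illustration, the sketch below shows how a prediction log ID might be obtained before evaluating. The method name get_prediction_logs and the id attribute on the returned log objects are assumptions made for this example and are not defined in this section; refer to the prediction log methods documentation for the exact calls.

# Illustrative sketch only: `get_prediction_logs` and the `.id` attribute are
# assumed names; consult the prediction log methods documentation for the
# actual API. `client` is an authenticated deeploy Client created earlier.
deployment_id = "example"

# Fetch recent prediction logs for the Deployment (assumed method name).
prediction_logs = client.get_prediction_logs(deployment_id)

# Use the ID of the first returned log when evaluating (assumed attribute).
prediction_log_id = prediction_logs[0].id

With a prediction log ID available, a prediction can be evaluated as follows: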

from deeploy import CreateEvaluation

# `client` is assumed to be an authenticated deeploy Client,
# created as described earlier in the documentation.
deployment_id = "example"
prediction_log_id = "example"

# Example of disagreeing with the prediction
evaluation_input_disagree: CreateEvaluation = {
    "agree": False,
    "desired_output": {"predictions": [True]},
    "comment": "Example evaluation from the Python Client",
}

evaluation_disagree = client.evaluate(
    deployment_id, prediction_log_id, evaluation_input_disagree
)

# Example of agreeing with the prediction
evaluation_input_agree: CreateEvaluation = {
    "agree": True,
    "comment": "Example evaluation from the Python Client",
}

evaluation_agree = client.evaluate(
    deployment_id, prediction_log_id, evaluation_input_agree
)