Performance evaluation with actuals
Evaluate the performance of a Deployment by comparing predicted outcomes to actuals: ground-truth values observed in the real world. Comparing the two helps you determine a Deployment’s effectiveness and identify areas for improvement.
Collect actuals
This guide assumes you have already created a Deployment and made predictions. Add actuals using either:
- the Deeploy API
- the Python client
Consult the actuals section of the Swagger documentation for the available endpoints.
To comply with KServe Data Plane formatting, each actual must match one of the following structures:
{ "outputs": [<value>] }
or
{ "predictions": [<value>] }
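As an illustration, the sketch below posts a single actual in this format over HTTP. The host, endpoint path, and authorization scheme are assumptions made for the sketch, not the documented API; consult the Swagger documentation for the exact endpoint and for how an actual is linked to its original prediction.

```python
import requests

# All names below are placeholders for this sketch: the host, the endpoint
# path, and the authorization header are assumptions -- the Swagger
# documentation defines the real contract.
DEEPLOY_HOST = "https://api.example.deeploy.ml"   # hypothetical host
DEPLOYMENT_ID = "<your-deployment-id>"
TOKEN = "<token-used-to-supply-the-actual>"

# KServe Data Plane formatting: either "outputs" or "predictions" as the key.
actual = {"predictions": [1]}  # e.g. the observed class for a classifier

response = requests.post(
    f"{DEEPLOY_HOST}/deployments/{DEPLOYMENT_ID}/actuals",  # hypothetical path
    json=actual,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()
```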
Monitor actuals
Monitor actuals for a Deployment on the Performance tab of your Deployment’s Monitoring page. Specifically, you can monitor:
- Accuracy: for classification models only
- Root mean squared error (RMSE): for regression models only
Read monitoring a Deployment for more details.
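Both metrics are computed from pairs of predicted and observed values. The sketch below shows their standard definitions; the function names are illustrative and not part of the Deeploy Python client.

```python
import math

def accuracy(predicted, observed):
    """Fraction of predictions that match the observed label (classification)."""
    correct = sum(p == o for p, o in zip(predicted, observed))
    return correct / len(observed)

def rmse(predicted, observed):
    """Root mean squared error: sqrt(mean((p - o)^2)) (regression)."""
    mean_squared_error = sum(
        (p - o) ** 2 for p, o in zip(predicted, observed)
    ) / len(observed)
    return math.sqrt(mean_squared_error)

# Three predictions scored against their actuals:
print(accuracy([1, 0, 1], [1, 1, 1]))            # 0.666...
print(rmse([2.5, 0.0, 2.1], [3.0, -0.5, 2.0]))   # ~0.41
```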
View actuals
To see the actual for a specific prediction, click the prediction on the Predictions page within a Deployment, then scroll down to view:
- the token used to supply the actual
- the observed value