Predictions
Once you have a functioning Deployment, you can test it on the Interact tab, or make predictions using the Deployment API or the Python client. In this section, you will find guides to help you get the most out of predictions.
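For example, a prediction can be requested over HTTPS. The sketch below uses Python's `requests` library; the URL, the token, and the `instances` payload shape are placeholders and depend on your own Deployment and model.

```python
import requests

# Hypothetical values: replace with your Deployment's URL and API token.
DEPLOYMENT_URL = "https://api.example.deeploy.ml/workspaces/<workspace-id>/deployments/<deployment-id>/predict"
API_TOKEN = "<your-deployment-token>"

response = requests.post(
    DEPLOYMENT_URL,
    json={"instances": [[5.1, 3.5, 1.4, 0.2]]},  # payload shape depends on your model
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # the prediction, including its prediction ID
```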
📄️ Performance evaluation with Actuals
Comparing predicted results against ground truth or real-world observations allows data scientists to assess how accurately a model predicts or classifies data. This evaluation helps determine the model's effectiveness and identify areas for improvement.
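As an illustration, actuals for earlier predictions could be submitted over the API roughly as sketched below. The endpoint path and the field names (`predictionIds`, `actualValues`) are assumptions for illustration; the guide describes the exact contract.

```python
import requests

# Hypothetical endpoint and payload; consult the Actuals guide for the exact format.
ACTUALS_URL = "https://api.example.deeploy.ml/workspaces/<workspace-id>/deployments/<deployment-id>/actuals"

payload = {
    "predictionIds": ["<prediction-id>"],    # IDs returned when the predictions were made
    "actualValues": [{"predictions": [1]}],  # the observed (ground-truth) outcomes
}

response = requests.put(
    ACTUALS_URL,
    json=payload,
    headers={"Authorization": "Bearer <your-deployment-token>"},
    timeout=30,
)
response.raise_for_status()
```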
📄️ Feedback loop with evaluations
Deeploy is built on the belief that clear explanations of how deployed models arrive at their predictions are crucial, both for present understanding and for future reference. Each deployment in Deeploy therefore exposes an endpoint for collecting feedback from experts or end users. This feedback is solicited and recorded per prediction, enabling comprehensive evaluation.
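A minimal sketch of posting expert feedback for a single prediction is shown below. The endpoint path and the fields (`agree`, `desiredOutput`, `comment`) are assumed for illustration and may differ from the actual feedback endpoint described in the guide.

```python
import requests

# Hypothetical endpoint and fields; the Feedback loop guide documents the actual contract.
EVALUATION_URL = (
    "https://api.example.deeploy.ml/workspaces/<workspace-id>"
    "/deployments/<deployment-id>/predictionLogs/<prediction-log-id>/evaluations"
)

feedback = {
    "agree": False,                          # whether the expert agrees with the prediction
    "desiredOutput": {"predictions": [0]},   # the output the expert considers correct
    "comment": "Edge case: sensor reading was faulty.",
}

response = requests.post(
    EVALUATION_URL,
    json=feedback,
    headers={"Authorization": "Bearer <your-deployment-token>"},
    timeout=30,
)
response.raise_for_status()
```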
📄️ Custom IDs for predictions
When the automatically generated prediction ID does not meet the needs of your use case, you can define a custom ID. The custom ID attached to the request makes it easier to track and filter predictions.
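For illustration, a custom ID could be attached to a prediction request roughly as follows. The header name `X-Custom-Id` is an assumption; see the guide for the mechanism Deeploy actually uses to supply custom IDs.

```python
import requests

# The header name below is an assumption for illustration only.
response = requests.post(
    "https://api.example.deeploy.ml/workspaces/<workspace-id>/deployments/<deployment-id>/predict",
    json={"instances": [[5.1, 3.5, 1.4, 0.2]]},
    headers={
        "Authorization": "Bearer <your-deployment-token>",
        "X-Custom-Id": "order-2024-000123",  # e.g. tie the prediction to an order number
    },
    timeout=30,
)
response.raise_for_status()
```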