Evaluate a model#

The Evaluate a model task measures the quality of an already-trained machine learning model on the data of the currently displayed sheet. The quality of a model is described by metrics (e.g., accuracy).

The reported metrics depend on how the model was trained. If the model's label is a categorical column, the model was trained for classification, and the report includes metrics such as accuracy, confusion tables, log loss, AUC, and PR-AUC. If the label is a numerical column, the model was trained for regression, and the report includes metrics such as RMSE.
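To make these metrics concrete, here is an illustrative pure-Python sketch of how two of them (accuracy for classification, RMSE for regression) are defined. The toy data is hypothetical, and this is not how the tool computes metrics internally:

```python
import math

# Accuracy: fraction of test examples where the predicted label
# matches the true label (classification).
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# RMSE: root mean squared error between predictions and true
# numerical values (regression).
def rmse(y_true, y_pred):
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    )

# Classification example: 2 of 3 predictions are correct.
acc = accuracy(["cat", "dog", "cat"], ["cat", "dog", "dog"])  # 2/3

# Regression example: errors are 0, 0, and 2.
err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3) ≈ 1.155
```

The other reported metrics (confusion tables, log loss, AUC, PR-AUC) are likewise computed by comparing the model's predictions against the true labels of the test examples.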

Note that the quality of a model is also available in the Quality tab of the Understand a model task. However, the quality shown there is computed during training, whereas this task evaluates the model on the examples of the currently displayed sheet.
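This distinction matters because quality measured on the data a model was trained on can be optimistic. The following toy sketch (hypothetical data, trivial "model") shows how the same model can score very differently on training examples versus held-out test examples:

```python
# Illustrative only: a trivial "model" that always predicts the most
# frequent label seen during training.
def train_majority_class(labels):
    return max(set(labels), key=labels.count)

def accuracy(prediction, labels):
    # Fraction of examples the constant prediction gets right.
    return sum(1 for y in labels if y == prediction) / len(labels)

train_labels = ["yes", "yes", "yes", "no"]  # data used for training
test_labels = ["no", "no", "yes", "no"]     # separate test examples

pred = train_majority_class(train_labels)   # predicts "yes"
train_acc = accuracy(pred, train_labels)    # 0.75 on training data
test_acc = accuracy(pred, test_labels)      # 0.25 on test data
```

Evaluating on a separate sheet of test examples, as this task does, gives a more honest estimate of how the model will behave on new data.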

See the YDF Evaluation metrics page for a detailed explanation of each reported metric.

Use this task as follows:

  1. Open a sheet containing the test examples. The sheet should be in tabular format.

  2. Select the “Evaluate a model” task.

  3. Select a previously trained model.

  4. Click “Evaluate.”

    After a few seconds, the evaluation window opens.