This function takes the correct labels and predictions for all samples and evaluates the results using the

  • Area Under the Receiver Operating Characteristic (ROC) Curve (AU-ROC)

  • and the Precision-Recall (PR) Curve

as metrics. Predictions can be supplied either for a single case or as a matrix after resampling of the dataset.

Prediction results are usually produced with the function plm.predictor.
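For reference, the sketch below shows how AU-ROC and precision-recall values can be computed from a plain vector of labels and scores with the pROC package. It only illustrates the metrics themselves, not the internals of eval.result; the toy label and score vectors are assumptions made for this example.

# Illustrative sketch only: toy labels and scores, not the label/pred
# objects expected by eval.result
library(pROC)

set.seed(1)
lab <- c(rep(-1, 50), rep(1, 50))                   # true classes
scr <- c(rnorm(50, 0.3, 0.1), rnorm(50, 0.7, 0.1))  # model scores

# ROC curve and its AUC
roc.obj <- roc(response = lab, predictor = scr,
               levels = c(-1, 1), direction = "<")
auc(roc.obj)

# precision (positive predictive value) and recall (true positive rate)
# for a grid of decision thresholds taken from the scores
thr  <- sort(unique(scr))
prec <- sapply(thr, function(t) sum(scr >= t & lab == 1) / sum(scr >= t))
rec  <- sapply(thr, function(t) sum(scr >= t & lab == 1) / sum(lab == 1))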

Usage

eval.result(label, pred)

Arguments

label

label object containing the correct class label for each sample

pred

predictions made by the model for each sample; should be a matrix with dimensions length(label) x 1 or length(label) x num.resample (see the sketch below)
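A minimal sketch of the two accepted shapes of pred (the sample size and the values are arbitrary and only illustrate the dimensions):

n <- 100                                           # stands in for length(label)
pred.single    <- matrix(runif(n), ncol = 1)       # length(label) x 1
pred.resampled <- matrix(runif(n * 5), ncol = 5)   # length(label) x num.resample (here 5)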

Value

list containing

  • $roc.average average ROC-curve across repeats or a single ROC-curve on the complete dataset;

  • $auc.average AUC value for the average ROC-curve;

  • $ev.list list of length num.folds, containing the number of false positives, false negatives, true negatives, and true positives for different decision thresholds;

  • $pr.list list of length num.folds, containing the positive predictive value (precision) and true positive rate (recall) values used to plot the PR curves;

If the prediction matrix had more than one column, i.e. if the model has been trained with several repeats, the function will additionally return

  • $roc.all list of roc objects (see roc) for every repeat;

  • $aucspr vector of AUC values for the PR curves for every repeat;

  • $auc.all vector of AUC values for the ROC curves for every repeat.
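
Examples

A hedged usage sketch: label.obj and pred.mat are assumed to already exist (a label object from the data-preparation step and a length(label) x num.resample prediction matrix as returned by plm.predictor), so this block is not runnable on its own.

# label.obj: label object (assumed to exist, see Arguments)
# pred.mat : length(label) x num.resample prediction matrix,
#            e.g. produced by plm.predictor (assumed to exist)
ev <- eval.result(label.obj, pred.mat)

ev$auc.average   # AUC value of the average ROC-curve
ev$pr.list       # precision/recall values used to plot the PR curves
ev$auc.all       # per-repeat ROC AUCs (only present with several repeats)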