Compute classification performance according to common evaluation metrics: overall and mean per-class classification error, AUC, and log-loss.

classLoss(actual, predicted, prob, eval.metric = "overall_error")

Arguments

actual
factor array with the true class labels.
predicted
factor array with the predicted class labels.
prob
matrix of predicted class membership probabilities, with observations in rows and classes in columns. Required only to compute AUC and log-loss.
eval.metric
evaluation metric to be used. It can be one of c("overall_error", "mean_error", "auc", "logloss"). The default option is "overall_error".

Value

The classification performance measure.

Details

There are four evaluation metrics available so far (rough hand computations of these measures are sketched after this list):

  • eval.metric="overall_error": default option. It gives the overall misclassification rate. It does not require the prob parameter.
  • eval.metric="mean_error": gives the mean per-class misclassification rate. It does not require the prob parameter.
  • eval.metric="auc": gives the mean per-class area under the ROC curve. It requires the prob parameter.
  • eval.metric="logloss": gives the cross-entropy or logarithmic loss. It requires the prob parameter.
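
As a rough sketch of the first two error rates and the log-loss (an illustration of the definitions, not the package's internal code), assuming actual and predicted are factors sharing the same levels and the columns of prob follow levels(actual):

  # Overall misclassification rate ("overall_error")
  overall_error <- mean(actual != predicted)

  # Mean per-class misclassification rate ("mean_error")
  mean_error <- mean(tapply(actual != predicted, actual, mean))

  # Logarithmic loss ("logloss"), clipping probabilities away from 0 and 1
  eps <- 1e-15
  p.true <- prob[cbind(seq_along(actual), as.integer(actual))]
  logloss <- -mean(log(pmax(pmin(p.true, 1 - eps), eps)))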

Examples

## Not run: ------------------------------------
library("mlbench")
library("caTools")
library("fastknn")

data("Ionosphere")

x <- data.matrix(subset(Ionosphere, select = -Class))
y <- Ionosphere$Class

set.seed(2048)
tr.idx <- which(sample.split(Y = y, SplitRatio = 0.7))
x.tr <- x[tr.idx,]
x.te <- x[-tr.idx,]
y.tr <- y[tr.idx]
y.te <- y[-tr.idx]

knn.out <- fastknn(xtr = x.tr, ytr = y.tr, xte = x.te, k = 10)

classLoss(actual = y.te, predicted = knn.out$class, eval.metric = "overall_error")
classLoss(actual = y.te, predicted = knn.out$class, prob = knn.out$prob, eval.metric = "logloss")
## ---------------------------------------------
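
Continuing from the objects created in the example above, the remaining metrics can be requested the same way; the "auc" call assumes the columns of knn.out$prob correspond to the class levels of y.te:

classLoss(actual = y.te, predicted = knn.out$class, eval.metric = "mean_error")
classLoss(actual = y.te, predicted = knn.out$class, prob = knn.out$prob, eval.metric = "auc")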