Compute classification performance according to common evaluation metrics: classification error, AUC and log-loss.
classLoss(actual, predicted, prob, eval.metric = "overall_error")
c("overall_error", "mean_error", "auc", "logloss")
. The default
option is "overall_error"
.The classification performance measure.
There are four evaluation metrics available so far:

eval.metric="overall_error": the default option. It gives the overall misclassification rate. It does not require the prob parameter.

eval.metric="mean_error": gives the mean per-class misclassification rate. It does not require the prob parameter.

eval.metric="auc": gives the mean per-class area under the ROC curve. It requires the prob parameter.

eval.metric="logloss": gives the cross-entropy or logarithmic loss. It requires the prob parameter.
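To make the four metrics concrete, the sketch below computes each one by hand for a small binary example, using the standard definitions (overall misclassification rate, average of per-class error rates, mean one-vs-rest ROC AUC, and mean negative log-probability of the true class). It is illustrative only, not taken from the fastknn source, and it assumes prob is a matrix of class-membership probabilities with one column per class level.

# Illustrative sketch only: standard metric definitions, not the fastknn internals.
actual    <- factor(c("bad", "bad", "good", "good", "good"))
predicted <- factor(c("bad", "good", "good", "good", "bad"))
prob <- matrix(c(0.9, 0.4, 0.2, 0.1, 0.6,    # P(class = "bad")
                 0.1, 0.6, 0.8, 0.9, 0.4),   # P(class = "good")
               ncol = 2, dimnames = list(NULL, levels(actual)))

## overall error: share of wrong predictions
mean(actual != predicted)

## mean per-class error: average the error rate within each true class
mean(tapply(actual != predicted, actual, mean))

## mean per-class AUC: one-vs-rest AUC via the rank (Mann-Whitney) formula
auc.one <- function(score, is.pos) {
  r <- rank(score)
  n1 <- sum(is.pos); n0 <- sum(!is.pos)
  (sum(r[is.pos]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
mean(sapply(levels(actual), function(cl) auc.one(prob[, cl], actual == cl)))

## log-loss: mean negative log-probability assigned to the true class
p.true <- prob[cbind(seq_along(actual), as.integer(actual))]
-mean(log(pmax(p.true, 1e-15)))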
## Not run: ------------------------------------
# library("mlbench")
# library("caTools")
# library("fastknn")
#
# data("Ionosphere")
#
# x <- data.matrix(subset(Ionosphere, select = -Class))
# y <- Ionosphere$Class
#
# set.seed(2048)
# tr.idx <- which(sample.split(Y = y, SplitRatio = 0.7))
# x.tr <- x[tr.idx,]
# x.te <- x[-tr.idx,]
# y.tr <- y[tr.idx]
# y.te <- y[-tr.idx]
#
# knn.out <- fastknn(xtr = x.tr, ytr = y.tr, xte = x.te, k = 10)
#
# classLoss(actual = y.te, predicted = knn.out$class, eval.metric = "overall_error")
# classLoss(actual = y.te, predicted = knn.out$class, prob = knn.out$prob, eval.metric = "logloss")
## ---------------------------------------------
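The two remaining metrics can be requested the same way (a sketch continuing the example above; it assumes y.te and knn.out are still in the workspace):

# classLoss(actual = y.te, predicted = knn.out$class, eval.metric = "mean_error")
# classLoss(actual = y.te, predicted = knn.out$class, prob = knn.out$prob, eval.metric = "auc")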