Unified (formula-based) interface version of the learning vector quantization algorithms provided by class::olvq1(), class::lvq1(), class::lvq2(), and class::lvq3().
mlLvq(train, ...)

ml_lvq(train, ...)

# S3 method for class 'formula'
mlLvq(
  formula,
  data,
  k.nn = 5,
  size,
  prior,
  algorithm = "olvq1",
  ...,
  subset,
  na.action
)

# Default S3 method
mlLvq(train, response, k.nn = 5, size, prior, algorithm = "olvq1", ...)

# S3 method for class 'mlLvq'
summary(object, ...)

# S3 method for class 'summary.mlLvq'
print(x, ...)

# S3 method for class 'mlLvq'
predict(
  object,
  newdata,
  type = "class",
  method = c("direct", "cv"),
  na.action = na.exclude,
  ...
)
train: a matrix or data frame with predictors.

...: further arguments passed to the classification method or its predict() method (not used here for now).
formula: a formula with the left-hand term being the factor variable to predict and the right-hand term listing the independent, predictive variables, separated by plus signs. If the data frame provided contains only the dependent and independent variables, one can use the class ~ . short form (that one is strongly encouraged). Variables with a minus sign are eliminated. Calculations on variables are possible according to the usual formula conventions (possibly protected by using I()).
data: a data.frame to use as a training set.
k.nn: k used for k-NN, the number of neighbors considered. Default is 5.
size: the size of the codebook. Defaults to min(round(0.4 * nc * (nc - 1 + p/2), 0), n), where nc is the number of classes, p the number of predictors, and n the number of cases in the training set.
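The default size formula can be evaluated directly. A minimal base-R sketch, with illustrative values for nc, p, and n (not taken from any particular dataset):

```r
nc <- 3   # number of classes
p <- 4    # number of predictors
n <- 150  # number of cases in the training set
# Default codebook size, as given in the size= argument description
size <- min(round(0.4 * nc * (nc - 1 + p / 2), 0), n)
size
#> [1] 5
```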
prior: probabilities to represent classes in the codebook (default values are the proportions in the training set).
"olvq1"
(by default, the optimized 'lvq1' version), or
"lvq1"
, "lvq2"
, "lvq3"
.
subset: index vector with the cases to define the training set in use (this argument must be named, if provided).
na.action: function to specify the action to be taken if NAs are found. For ml_lvq(), na.fail is used by default: the calculation is stopped if there is any NA in the data. Another option is na.omit, where cases with missing values on any required variable are dropped (this argument must be named, if provided). For the predict() method, the default, and most suitable option, is na.exclude. In that case, rows with NAs in newdata= are excluded from the prediction, but reinjected in the final results so that the number of items is still the same (and in the same order as newdata=).
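This reinjection behavior is the standard base-R na.exclude mechanism. A minimal sketch using a base-R linear model (not ml_lvq() itself) to show the principle:

```r
# A tiny dataset with one missing predictor value
d <- data.frame(x = c(1, 2, NA, 4), y = c(2, 4, 6, 8))
fit <- lm(y ~ x, data = d, na.action = na.exclude)
pred <- predict(fit, newdata = d)
length(pred)  # still 4: the row with NA is kept, as an NA prediction
is.na(pred[3])
#> [1] TRUE
```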
response: a factor vector with the classes.
object: an mlLvq object.
newdata: a new dataset with the same conformation as the training set (same variables, except maybe the class for classification, or the dependent variable for regression). Usually a test set, or a new dataset to be predicted.
type: the type of prediction to return. For this method, only "class" is accepted, and it is the default. It returns the predicted classes.
"direct"
(default) or "cv"
. "direct"
predicts new cases in
newdata=
if this argument is provided, or the cases in the training set
if not. Take care that not providing newdata=
means that you just
calculate the self-consistency of the classifier but cannot use the
metrics derived from these results for the assessment of its performances.
Either use a different dataset in newdata=
or use the alternate
cross-validation ("cv") technique. If you specify method = "cv"
then
cvpredict()
is used and you cannot provide newdata=
in that case.
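A sketch of the cross-validation path (assuming the mlearning package is installed; fold assignment is random, so results vary between runs):

```r
library(mlearning)
data("iris", package = "datasets")
iris_lvq <- ml_lvq(Species ~ ., data = iris)
# Cross-validated predictions; newdata= cannot be supplied here
confusion(predict(iris_lvq, method = "cv"))
```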
ml_lvq()/mlLvq() creates an mlLvq, mlearning object containing the classifier and a lot of additional metadata used by the functions and methods you can apply to it, like predict() or cvpredict(). In case you want to program new functions or extract specific components, inspect the "unclassed" object using unclass().
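For instance (again assuming mlearning is installed; the exact component names are those of the underlying object and are not listed here):

```r
library(mlearning)
data("iris", package = "datasets")
iris_lvq <- ml_lvq(Species ~ ., data = iris)
# Peek at the internal components and metadata without the class dispatch
names(unclass(iris_lvq))
names(attributes(iris_lvq))
```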
mlearning(), cvpredict(), confusion(), and also class::olvq1(), class::lvq1(), class::lvq2(), and class::lvq3() that actually perform the classification.
# Prepare data: split into training set (2/3) and test set (1/3)
data("iris", package = "datasets")
train <- c(1:34, 51:83, 101:133)
iris_train <- iris[train, ]
iris_test <- iris[-train, ]
# One case with missing data in train set, and another case in test set
iris_train[1, 1] <- NA
iris_test[25, 2] <- NA
iris_lvq <- ml_lvq(Species ~ ., data = iris_train)
summary(iris_lvq)
#> Codebook:
#> Class Sepal.Length Sepal.Width Petal.Length Petal.Width
#> 5 setosa 4.857229 3.289157 1.466867 0.2108434
#> 33 setosa 5.489655 4.074138 1.422414 0.2603448
#> 68 versicolor 5.652671 2.605862 3.875345 1.1383086
#> 66 versicolor 6.518048 3.077022 4.479406 1.3286514
#> 108 virginica 6.524326 2.925320 5.560317 2.0350052
#> 132 virginica 7.606122 3.132653 6.504082 2.0673469
predict(iris_lvq) # This object only returns classes
#> [1] setosa setosa setosa setosa setosa setosa
#> [7] setosa setosa setosa setosa setosa setosa
#> [13] setosa setosa setosa setosa setosa setosa
#> [19] setosa setosa setosa setosa setosa setosa
#> [25] setosa setosa setosa setosa setosa setosa
#> [31] setosa setosa setosa versicolor versicolor versicolor
#> [37] versicolor versicolor versicolor versicolor versicolor versicolor
#> [43] versicolor versicolor versicolor versicolor versicolor versicolor
#> [49] versicolor versicolor versicolor versicolor versicolor versicolor
#> [55] versicolor versicolor versicolor versicolor versicolor versicolor
#> [61] versicolor versicolor versicolor versicolor versicolor versicolor
#> [67] virginica virginica virginica virginica virginica virginica
#> [73] versicolor virginica virginica virginica virginica virginica
#> [79] virginica virginica virginica virginica virginica virginica
#> [85] virginica versicolor virginica virginica virginica versicolor
#> [91] virginica virginica versicolor versicolor virginica virginica
#> [97] virginica virginica virginica
#> Levels: setosa versicolor virginica
# Self-consistency, do not use for assessing classifier performances!
confusion(iris_lvq)
#> 99 items classified with 94 true positives (error rate = 5.1%)
#> Predicted
#> Actual 01 02 03 (sum) (FNR%)
#> 01 setosa 33 0 0 33 0
#> 02 versicolor 0 33 0 33 0
#> 03 virginica 0 5 28 33 15
#> (sum) 33 38 28 99 5
# Use an independent test set instead
confusion(predict(iris_lvq, newdata = iris_test), iris_test$Species)
#> 50 items classified with 46 true positives (error rate = 8%)
#> Predicted
#> Actual 01 02 03 04 (sum) (FNR%)
#> 01 versicolor 15 1 0 1 17 12
#> 02 virginica 2 15 0 0 17 12
#> 03 setosa 0 0 16 0 16 0
#> 04 NA 0 0 0 0 0
#> (sum) 17 16 16 1 50 8