k-Nearest Neighbours (KNN)

Last updated Sep 23, 2022

To classify an example, we find the $k$ training examples closest to it and take the mode of their labels.
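A minimal sketch of this procedure in Python with NumPy, assuming a small labelled dataset; the function name `knn_predict` and the toy data are illustrative, not from the note:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    # Euclidean (L2) distance from the query to every training example
    distances = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training examples
    nearest = np.argsort(distances)[:k]
    # Mode (most common label) among the k neighbours
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy example: two clusters in 2D
X_train = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.1, 0.05]), k=3))  # -> 0
```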

Works based on the assumption that examples with similar features are likely to have similar labels

As $k$ grows, training error increases and approximation error (the gap between training and test error) decreases.

We measure distance using a “norm” of the difference between feature vectors. The most common norm is the L2-Norm or Euclidean Norm.
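For two feature vectors $x_i, x_j \in \mathbb{R}^d$, the Euclidean norm of their difference is

$$\|x_i - x_j\|_2 = \sqrt{\sum_{m=1}^{d} (x_{im} - x_{jm})^2}$$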

# Performance

KNN can perform poorly in high dimensions (see: curse of dimensionality), since distances between points become less informative as the number of features grows.