- Set $y_i = +1$ for one class ("important")
- Set $y_i = -1$ for the other class ("not important")
- To predict, we look at whether $w^Tx_i$ is closer to $+1$ or $-1$
- $\hat{y}_i = \text{sign}(w^Tx_i)$

Least squares may over-penalize: the only thing we care about is the sign of the prediction, not how far it is from the decision boundary.
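A small sketch of the over-penalization problem (the scores here are made up for illustration; numpy is assumed):

```python
import numpy as np

# Hypothetical scores w^T x_i for three points, all with true label y_i = +1.
scores = np.array([1.0, 5.0, 100.0])   # all have the correct sign
y = np.array([1.0, 1.0, 1.0])

# Squared error (w^T x_i - y_i)^2 penalizes the very confident score 100
# enormously, even though sign(100) = +1 is a correct classification.
squared_errors = (scores - y) ** 2
print(squared_errors)   # [0., 16., 9801.]
```

The third point is classified correctly with a huge margin, yet it dominates the least squares objective.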

Could we instead minimize the number of classification errors? This is called the 0-1 loss function: you either get the classification wrong (1) or right (0).

$L(\hat{y}_i, y_i) = \begin{cases} 0 & \hat{y}_i = y_i \\ 1 & \hat{y}_i \neq y_i \end{cases}$

Illustration above is if $y_i = +1$. Flip for $y_i = -1$.
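A minimal sketch of computing the 0-1 loss for a linear classifier (the data and `w` below are made up for illustration; numpy is assumed):

```python
import numpy as np

def zero_one_loss(w, X, y):
    """Total 0-1 loss: the number of sign mistakes made by w."""
    y_hat = np.sign(X @ w)
    return int(np.sum(y_hat != y))

# Toy data: three 2D examples with labels in {-1, +1}.
X = np.array([[1.0, 2.0], [2.0, -1.0], [-1.0, 1.0]])
y = np.array([1, 1, -1])
w = np.array([1.0, 0.0])

print(zero_one_loss(w, X, y))  # scores 1, 2, -1 -> signs match y -> 0 mistakes
```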

Unfortunately, the 0-1 loss is non-convex. We can, once again, use a convex approximation, called the hinge loss:

$L(\hat{y}_i, y_i) = \max(0, 1 - y_i w^T x_i)$

See also: SVM

This is an upper bound on the 0-1 loss (as illustrated by the picture). For example, if the hinge loss is 18.3, then the number of training errors is at most 18.
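The upper-bound property can be checked numerically. A sketch, using made-up margins $y_i w^T x_i$ (numpy assumed):

```python
import numpy as np

# Margins y_i * w^T x_i for four hypothetical examples.
margins = np.array([2.0, 0.5, -1.0, -0.2])

hinge = np.maximum(0, 1 - margins)       # [0., 0.5, 2., 1.2]
zero_one = (margins <= 0).astype(float)  # [0., 0., 1., 1.]

# Hinge dominates 0-1 pointwise, so its sum bounds the error count.
print(hinge.sum(), zero_one.sum())       # 3.7 vs 2.0
```

Since the hinge loss is at least as large as the 0-1 loss on every example, a total hinge loss of 3.7 guarantees at most 3 training errors.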

Similarly, we can use the log-sum-exp trick to get the logistic loss, which is convex *and* differentiable.

$L(\hat{y}_i, y_i) = \log(1 + \exp(-y_i w^T x_i))$
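A sketch of the log-sum-exp smoothing, on made-up margins (numpy assumed). The idea is that $\max(0, -m) \approx \log(\exp(0) + \exp(-m)) = \log(1 + \exp(-m))$:

```python
import numpy as np

margins = np.array([3.0, 0.0, -3.0])   # y_i * w^T x_i

# Logistic loss: smooth, convex, differentiable everywhere.
logistic = np.log1p(np.exp(-margins))

print(np.round(logistic, 3))           # [0.049 0.693 3.049]
```

Note the loss is small but nonzero for confident correct predictions ($m = 3$), about $\log 2$ at the boundary ($m = 0$), and grows roughly linearly for confident mistakes ($m = -3$).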

### Perceptron

Only works for *linearly-separable* data

- Searches for a $w$ such that $\text{sign}(w^Tx_i) = y_i, \forall i$
- Intuition is that each update nudges the boundary toward correctly classifying the mistaken example
- Start with $w_{0}=0$
- Classify each example until we reach a mistake
- Then, update $w$ to $w_{t+1} = w_t + y_ix_i$

- If a perfect classifier exists, this algorithm finds one in a finite number of steps
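The steps above can be sketched as follows (the toy dataset and the `max_passes` cap are assumptions added for illustration; numpy assumed):

```python
import numpy as np

def perceptron(X, y, max_passes=100):
    """Perceptron: start at w = 0 and, on each mistake, set w <- w + y_i * x_i.
    Assumes linearly separable data with labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(max_passes):
        mistakes = 0
        for xi, yi in zip(X, y):
            if np.sign(w @ xi) != yi:   # mistake (sign(0) = 0 also counts)
                w = w + yi * xi
                mistakes += 1
        if mistakes == 0:               # a full pass with no errors: done
            return w
    return w

# Toy linearly separable data.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(np.all(np.sign(X @ w) == y))  # True
```

The `max_passes` cap is only a safeguard for non-separable inputs; on separable data the loop exits as soon as a pass makes no mistakes.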