The fundamental tradeoff has two parts:
- How small you can make the training error, and
- How well the training error approximates the test error.
Generally,
- Training error goes down as the model gets more complicated, but
- Approximation error goes up as the model gets more complicated (see the sketch below).
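A minimal sketch of the tradeoff (not from the original notes): fitting polynomials of increasing degree to a synthetic 1-D dataset, training error shrinks while the gap between test and training error (the approximation error) tends to grow. The data, degrees, and error metric here are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (assumed for illustration).
x_train = rng.uniform(-1, 1, 30)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.3, 30)
x_test = rng.uniform(-1, 1, 1000)
y_test = np.sin(3 * x_test) + rng.normal(0, 0.3, 1000)

# Increasing polynomial degree = increasing model complexity.
for degree in [1, 3, 7, 12]:
    coeffs = np.polyfit(x_train, y_train, degree)              # fit the model
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    approx_err = test_err - train_err                          # gap between test and train error
    print(f"degree={degree:2d}  E_train={train_err:.3f}  "
          f"E_test={test_err:.3f}  E_approx={approx_err:.3f}")
```

Running it typically shows training error dropping with degree while the train/test gap widens, which is exactly the two-sided tension the bullets describe.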